A smart contract audit is one of the more expensive engineering invoices most protocol teams ever sign. A serious firm bills $50–250k for a meaningful engagement. The mistake we see most often: teams hand over a codebase that’s still mid-refactor, with patchy tests and unexercised edge cases, and then pay a senior auditor’s day rate to be told that `nonReentrant` is missing on a function. That’s not what auditors are for, and it’s not what you’re paying for.
We’ve gone through audits with clients across DeFi, NFT-Fi, and bridge protocols. The teams that get the most out of an audit treat the two weeks before kickoff as a structured pre-audit pass. This is what that pass looks like.
Freeze the codebase and tag it
The first concrete step is the most boring one. Tag the exact commit you’re auditing. Branch off it. Communicate to the team that the only changes landing on that branch are audit-driven fixes — not new features, not refactors, not “while we’re here” cleanup.
The reason is mechanical: an auditor’s report is keyed to a specific commit. If your `main` keeps moving, by the time the report comes back you’ve got a three-way merge between the report, the audited commit, and the new state of the world. That’s where regressions hide. Pin the commit. Make the rule clear.
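In practice this is a couple of commands (tag and branch names here are illustrative):

```
# From the repo root, at the commit going to audit
git tag -a audit-v1.0 -m "Frozen for audit"
git checkout -b audit/fixes audit-v1.0
git push origin audit-v1.0 audit/fixes
```

Everything the auditors see lives at `audit-v1.0`; everything you change in response lives on `audit/fixes`.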
Inventory every external interaction
Make a table — a literal spreadsheet — of every external boundary in the system:
- Every function callable by an EOA, one row per function.
- Every external contract you call, with the trust assumption you’re making about each.
- Every storage slot a user can influence.
- Every assumption about token behavior (fee-on-transfer, rebasing, ERC-777 hooks).
This inventory is doing two things. First, it enumerates the attack surface auditors are about to map; you might as well do it first. Second, it forces you to articulate, in writing, the trust assumptions baked into your code. We have yet to see a protocol where this exercise didn’t shake loose at least one assumption that turned out to be wrong.
A pattern we see a lot: the team is calling a Chainlink feed and assumes it returns a non-stale, non-zero price. That’s usually true, but the function doesn’t enforce it. Force the assumption to be a check. Inventory found it.
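A minimal version of that check might look like the following, assuming a standard Chainlink `AggregatorV3Interface` feed (the import path varies by `@chainlink/contracts` version, and the staleness threshold is illustrative — tune it to the feed’s heartbeat):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {AggregatorV3Interface} from "@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";

contract OracleConsumer {
    AggregatorV3Interface public immutable feed;
    uint256 public constant MAX_STALE = 1 hours; // illustrative; set per feed heartbeat

    constructor(AggregatorV3Interface _feed) {
        feed = _feed;
    }

    function getPrice() public view returns (uint256) {
        (, int256 answer,, uint256 updatedAt,) = feed.latestRoundData();
        // Enforce the assumptions instead of trusting them.
        require(answer > 0, "oracle: non-positive price");
        require(block.timestamp - updatedAt <= MAX_STALE, "oracle: stale price");
        return uint256(answer);
    }
}
```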
Ship invariant tests, not just unit tests
Most protocol teams have unit tests. Fewer have invariant tests. Auditors will run invariant tests against your code before they read it; you should too.
A unit test says “given this input, expect this output.” An invariant test says “for any sequence of legal operations, this property must hold.” For a lending protocol: total deposits minus total withdrawals minus current balance equals zero. For a DEX: the constant-product invariant holds within rounding. For a vault: shares × pricePerShare equals totalAssets within rounding.
Foundry’s invariant testing is the easiest place to start. Define the invariant. Write a handler that exercises legal operations. Run it for ten thousand fuzz iterations. The bugs that surface here are the same bugs auditors will find — except you’re finding them for free, and you can fix them before the timer starts.
```solidity
function invariant_solvency() public view {
    uint256 totalShares = vault.totalSupply();
    uint256 totalAssets = vault.totalAssets();
    uint256 ppfs = vault.pricePerShare();
    // Shares valued at pricePerShare must equal assets, within one wei of rounding.
    assertApproxEqAbs(totalShares * ppfs / 1e18, totalAssets, 1);
}
```
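The handler side can be a thin contract that exposes only legal operations to the fuzzer. A sketch, where the vault interface and function names are illustrative:

```solidity
// Foundry invariant handler sketch; `IVault` and its signatures are illustrative.
import {Test} from "forge-std/Test.sol";

interface IVault {
    function deposit(uint256 assets) external;
    function withdraw(uint256 shares) external;
    function balanceOf(address owner) external view returns (uint256);
}

contract VaultHandler is Test {
    IVault public immutable vault;

    constructor(IVault _vault) {
        vault = _vault;
    }

    // Only legal operations: bounded deposits, withdrawals of shares we actually hold.
    function deposit(uint256 assets) external {
        assets = bound(assets, 1, 1e24);
        vault.deposit(assets);
    }

    function withdraw(uint256 shares) external {
        shares = bound(shares, 0, vault.balanceOf(address(this)));
        if (shares == 0) return;
        vault.withdraw(shares);
    }
}
```

Register it with `targetContract(address(handler))` in the invariant test’s `setUp`, and set the run count in `foundry.toml` under `[invariant]`.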
If you write five of these for the core economic properties of your system, you’ll be ahead of most protocol teams entering an audit.
Document the threat model
Auditors come in with their own threat model. You should hand them yours. A threat model is not a long document. It’s a short one that answers four questions for each major actor:
- What can they do?
- What are they incentivized to do?
- What’s the worst they can do without breaking a security assumption?
- What’s the worst they can do if they break one?
Doing this for every role — user, LP, keeper, governance, oracle — typically takes a day. The output is a one-page document that becomes the basis for the auditor’s adversarial mindset. It also forces your own team to think about edge cases you’d otherwise miss because they don’t fit the happy path you’ve been building.
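One entry per actor is enough; the contents here are illustrative, not a template you must follow:

```
Actor: Keeper
  Can do:       call liquidate() and harvest() on any position
  Incentive:    liquidation bonus; MEV on harvest timing
  Worst case (assumptions hold):   delays liquidations to maximize the bonus
  Worst case (assumptions broken): colludes with the oracle to liquidate healthy positions
```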
Strip the code of dead weight
This is unglamorous and high-leverage. Walk the codebase and remove anything that isn’t doing work:
- Functions that aren’t called by anything inside or outside.
- Modifiers applied “just in case.”
- Imports of OpenZeppelin contracts you only use one function from.
- Assembly blocks that were optimization experiments and never benchmarked.
Every line of code an auditor reads costs you money. Lines that aren’t doing anything cost you the same as lines that are. Aggressive deletion is one of the highest-ROI activities in the pre-audit window.
Run a Slither pass and fix everything reasonable
Slither is free. Run it. Read the output. Fix the high-confidence findings and document the false positives. We’ve seen teams skip this because “the auditors will run it anyway.” That’s true. They’ll run it, and then send you an invoice for the time it took to triage findings you could have triaged yourself in an afternoon.
The same goes for any other low-cost static analysis your stack supports — Aderyn for Foundry projects, gas snapshots, Slither’s human-summary printer. Tooling output is a free first pass.
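The whole pass is a handful of commands (exact flags depend on your tool versions):

```
# Slither: triage high-confidence findings first, then skim the summary
slither . --exclude-informational
slither . --print human-summary

# Aderyn on a Foundry project: writes a report.md in the project root
aderyn .

# Gas baseline, so audit-driven fixes don't silently regress gas
forge snapshot
```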
Write the README the auditor wants
The auditor needs to ramp on your codebase in two days. Help them. A useful pre-audit README has, in order:
- One paragraph explaining what the protocol does, in business terms.
- A two-paragraph technical overview of the architecture.
- A list of the most security-critical functions, with one-line summaries.
- A list of the most security-critical invariants the system maintains.
- Known limitations and out-of-scope concerns.
- How to set up and run the test suite, including invariant tests.
This is not the public-facing README. This is internal-quality, written for someone who has to understand your system in 48 hours. Spend an afternoon on it. The audit report will be measurably better.
Pre-flight the deployment scripts
This one bites a lot of teams. The contracts pass audit. Then the team hand-rolls a deployment script under time pressure, gets a constructor argument wrong, and the protocol launches with a parameter that breaks an invariant the audit was specifically protecting.
Audit your deployment scripts the same way you audit your contracts. Have someone other than the author read them. Verify every constructor argument against the deployment plan. Dry-run on a fork. The deployment is part of the system; treat it as such.
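A fork dry-run is cheap with standard Foundry tooling (the RPC variable and placeholder address are illustrative):

```
# Fork mainnet locally
anvil --fork-url "$MAINNET_RPC_URL" &

# Simulate the deployment; forge script only broadcasts with --broadcast
forge script script/Deploy.s.sol --rpc-url http://127.0.0.1:8545

# Spot-check constructor arguments on the simulated deployment
cast call <deployed-address> "owner()(address)" --rpc-url http://127.0.0.1:8545
```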
What this gets you
Two outcomes worth caring about. The first is dollar value: every issue you surface yourself is a finding you didn’t pay an auditor’s hourly rate to surface. We’ve watched teams cut audit cost by a third by doing this work seriously.
The second is harder to quantify but more important: the audit becomes a conversation with peers about deep design choices, instead of a remedial review of basics. That’s where audits earn their keep — when both sides are operating at the same level of rigor and the auditor can spend their time on the things only an outside expert will catch.
If you’re approaching an audit and want a structured pre-audit pass run on your protocol, we do this work.