Cross-chain bridges have lost more money than any other category of DeFi protocol. The cumulative total across the major incidents — Ronin, Wormhole, Nomad, Multichain, Harmony — is over $2.5B. There are reasons for this concentration of losses, and most of them are addressable.

We’ve worked on bridge integrations for several protocols and on a custom bridge for an L2/L3 ecosystem. This is the threat model we’d start from for any serious bridge build today.

Why bridges are hard

A bridge is, fundamentally, a mechanism that says “this asset on chain A corresponds to this asset on chain B.” That correspondence is enforced by some set of validators or by a smart contract that locks and mints. The security of the bridge is the security of the correspondence — and the correspondence sits across two systems with different trust assumptions, different failure modes, and different upgrade paths.
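The lock-and-mint correspondence can be made concrete with a minimal sketch. All names here (`LockMintBridge`, `lock`, `mint`, `burn_and_release`) are invented for illustration; the point is the invariant a real bridge must preserve — wrapped supply on chain B never exceeds collateral locked on chain A:

```python
class LockMintBridge:
    """Toy lock-and-mint ledger. Every major bridge exploit amounts to
    violating the invariant enforced in mint(): minting wrapped assets
    that aren't backed by locked collateral."""

    def __init__(self):
        self.locked = {}  # asset -> amount locked on the source chain (A)
        self.minted = {}  # asset -> wrapped amount minted on the destination chain (B)

    def lock(self, asset: str, amount: int) -> None:
        """User deposits on chain A; collateral increases."""
        self.locked[asset] = self.locked.get(asset, 0) + amount

    def mint(self, asset: str, amount: int) -> None:
        """Chain B mints wrapped tokens -- only against unused collateral."""
        backed = self.locked.get(asset, 0) - self.minted.get(asset, 0)
        if amount > backed:
            raise ValueError("mint would exceed locked collateral")
        self.minted[asset] = self.minted.get(asset, 0) + amount

    def burn_and_release(self, asset: str, amount: int) -> None:
        """User burns wrapped tokens on chain B to redeem collateral on chain A."""
        if amount > self.minted.get(asset, 0):
            raise ValueError("cannot burn more than minted")
        self.minted[asset] -= amount
        self.locked[asset] -= amount
```

Everything that follows — validators, light clients, challenge windows — exists to decide, safely, when `mint` may be called.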

The historic failures cluster into a small number of root causes:

Validator compromise (Ronin, $625M). The bridge depended on a small set of validators. The keys for a quorum got compromised. The bridge minted assets on the destination chain that weren’t backed on the source chain.

Verifier bug (Wormhole, $325M). The bridge contract trusted a signed message; a flaw in signature verification let an attacker forge a message with no actual signatures.

Replay attacks (Nomad, $190M). A change to the trusted-roots logic let attackers replay valid messages with different recipients. Once one wallet figured it out, dozens piled in.

Multisig key compromise (Harmony, $100M). The bridge’s funds were controlled by a 2-of-5 multisig; compromising just two keys was enough to authorize arbitrary fund movements.

The pattern: every major bridge exploit was a failure of authentication or validation in the trust mechanism, not a failure of the asset-locking primitive itself. That’s where the architecture has to focus.

The security architecture we’d recommend

A serious bridge in 2026 doesn’t depend on a single trust mechanism. It composes several, with different failure modes, in series.

Layer 1: Light client verification. When possible, the destination chain should verify the source chain’s state directly via a light client. This eliminates trust in any third-party relayer; the bridge trusts the consensus of the source chain, which is a much stronger assumption. Cost: light clients are expensive to maintain on-chain (Ethereum verifying Solana is computationally heavy). Some pairs have efficient light clients (zk-bridges between L2s and L1); some don’t.
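A minimal sketch of the light-client idea, heavily simplified: a real light client verifies consensus signatures or zk proofs, not just hash linkage, and `header_hash` here is an invented stand-in for the source chain’s actual header format.

```python
import hashlib

def header_hash(parent_hash: bytes, state_root: bytes, height: int) -> bytes:
    """Stand-in for the source chain's header hashing rule."""
    return hashlib.sha256(parent_hash + state_root + height.to_bytes(8, "big")).digest()

class LightClient:
    """Tracks source-chain headers on the destination chain. A header is
    accepted only if it extends the current head, so the bridge trusts
    source-chain consensus rather than whichever relayer delivered it."""

    def __init__(self, genesis_hash: bytes):
        self.head_hash = genesis_hash
        self.height = 0

    def submit_header(self, parent_hash: bytes, state_root: bytes, height: int) -> bytes:
        if parent_hash != self.head_hash or height != self.height + 1:
            raise ValueError("header does not extend current head")
        self.head_hash = header_hash(parent_hash, state_root, height)
        self.height = height
        return self.head_hash
```

Once the client holds a trusted `state_root`, individual transfers can be proven against it with Merkle proofs — which is exactly the per-message check the relayer can no longer fake.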

Layer 2: Independent attestation set. A set of independent validators (or oracles, or relayers) each attests to the source chain state. The bridge requires a supermajority of attestations to act. Crucially, the attestation set should be independent — different node operators, different infrastructure providers, different jurisdictions. A single AWS region going down should not take quorum out.
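The supermajority check itself is simple; the hard part is operational independence. A sketch, with HMAC standing in for the ECDSA/BLS signatures a real attestation set would use (all class and method names are invented for this example):

```python
import hashlib
import hmac

class AttestationVerifier:
    """Accepts a cross-chain message only with a strict >2/3 supermajority
    of valid validator attestations. HMAC over a shared secret stands in
    for real per-validator signatures so the sketch is self-contained."""

    def __init__(self, validator_keys: dict):
        self.keys = validator_keys  # validator id -> secret key

    def sign(self, validator: str, message: bytes) -> bytes:
        return hmac.new(self.keys[validator], message, hashlib.sha256).digest()

    def has_quorum(self, message: bytes, attestations: dict) -> bool:
        valid = [v for v, sig in attestations.items()
                 if v in self.keys
                 and hmac.compare_digest(sig, self.sign(v, message))]
        # strict supermajority: 3 * valid > 2 * total
        return 3 * len(valid) > 2 * len(self.keys)
```

Note that the threshold is counted against the full validator set, not against the attestations received — otherwise a partitioned set of two honest-looking validators could form a “quorum” of itself.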

Layer 3: Optimistic delay with challenge window. Even with light-client verification and a strong validator set, message execution is delayed (15 minutes to a few hours, depending on risk), during which any participant can submit a fraud proof. This is the model optimistic rollups and Across-style bridges use. The cost is latency; the benefit is a backstop against subtle bugs in the upstream verification.
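The state machine for this layer is small enough to sketch directly. Timestamps here are plain seconds passed in by the caller so the logic is deterministic; the names (`OptimisticQueue`, `enqueue`, `challenge`, `execute`) are invented for illustration:

```python
class OptimisticQueue:
    """Messages execute only after a challenge window elapses. During the
    window, anyone may challenge; a challenged message never executes.
    The window length (e.g. 900s to a few hours) is a policy choice."""

    def __init__(self, challenge_window: float):
        self.window = challenge_window
        self.pending = {}  # msg_id -> [queued_at, challenged]

    def enqueue(self, msg_id: str, now: float) -> None:
        self.pending[msg_id] = [now, False]

    def challenge(self, msg_id: str, now: float) -> None:
        queued_at, _ = self.pending[msg_id]
        if now - queued_at >= self.window:
            raise ValueError("challenge window closed")
        self.pending[msg_id][1] = True

    def execute(self, msg_id: str, now: float) -> bool:
        queued_at, challenged = self.pending[msg_id]
        if challenged:
            raise ValueError("message was challenged")
        if now - queued_at < self.window:
            raise ValueError("challenge window still open")
        del self.pending[msg_id]
        return True
```

A real implementation would also bond challengers and adjudicate the fraud proof; the sketch only shows the timing discipline that makes the backstop possible.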

Layer 4: Per-asset rate limits. Independent of any of the above, the bridge enforces rate limits per asset and per time window. If something has gone wrong with the upstream verification, the rate limit caps the loss. This is mechanical defense in depth — it doesn’t require knowing what went wrong, just that the bridge shouldn’t move 100% of TVL in a day.
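A sliding-window rate limiter per asset is a few lines. This is a sketch under the assumption of a fixed per-asset cap per window; names and parameters are invented for the example:

```python
from collections import deque

class AssetRateLimiter:
    """Caps total outflow per asset within a sliding time window. If the
    upstream verification is compromised, losses are bounded by the
    per-asset limit rather than by TVL."""

    def __init__(self, limits: dict, window_seconds: int):
        self.limits = limits            # asset -> max outflow per window
        self.window = window_seconds
        self.events = {a: deque() for a in limits}  # asset -> (timestamp, amount)

    def allow(self, asset: str, amount: int, now: float) -> bool:
        q = self.events[asset]
        # drop transfers that have aged out of the window
        while q and now - q[0][0] >= self.window:
            q.popleft()
        used = sum(amt for _, amt in q)
        if used + amount > self.limits[asset]:
            return False  # over the cap: queue for manual review, don't execute
        q.append((now, amount))
        return True
```

The key design choice is what happens on `False`: a serious bridge queues the transfer for manual review or a longer delay rather than dropping it, so the limiter degrades to latency, not to lost funds.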

The combination of these four layers gives you genuine defense in depth. A single layer failing — bug in light client, validator quorum compromise, missed challenge — shouldn’t compromise the bridge if the others hold.

The audit and ongoing-security stance

Bridges live or die on operational security as much as on smart contract correctness. The audit is necessary but not sufficient.

Formal verification of the verifier. The signature/proof verification logic is the highest-leverage code in the bridge. We’d push for formal verification on this specific component, not just an audit. The cost is high (formal verification is specialized work) and the benefit is asymmetric (a bug here is the most expensive bug you can have).

Bug bounty with realistic payouts. A bridge with $100M TVL needs a bug bounty with at least a 7-figure top payout. Bug-bounty researchers won’t dig deep into your code for a $25k payout; they will for $1M. Immunefi has the infrastructure for this.

Real-time monitoring with auto-pause. Production bridges should have monitoring that watches every cross-chain transfer, compares it against expected patterns, and pauses on anomalies. “More volume in the last hour than has ever happened in any 24-hour period” should trigger a pause and a page. Prevention is cheaper than recovery.
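The specific heuristic above — last-hour volume exceeding any 24-hour period ever observed — can be sketched as a simple circuit breaker. `VolumeMonitor` and its parameters are invented for this example, and a production monitor would track many more signals than raw volume:

```python
from collections import deque

class VolumeMonitor:
    """Auto-pause sketch: trips when rolling 1-hour volume exceeds the
    largest 24-hour volume ever observed. Seeded with a historical
    baseline; updates the record high as normal traffic grows."""

    def __init__(self, baseline_24h_max: float):
        self.max_24h = baseline_24h_max
        self.transfers = deque()  # (timestamp_seconds, amount)
        self.paused = False

    def record(self, amount: float, now: float) -> bool:
        """Record a transfer; returns True if the bridge should pause."""
        self.transfers.append((now, amount))
        while self.transfers and now - self.transfers[0][0] >= 24 * 3600:
            self.transfers.popleft()
        vol_24h = sum(a for _, a in self.transfers)
        vol_1h = sum(a for t, a in self.transfers if now - t < 3600)
        if vol_1h > self.max_24h:
            self.paused = True  # trip the breaker; page the on-call
        self.max_24h = max(self.max_24h, vol_24h)
        return self.paused
```

Note the check runs before the record high is updated, so an anomalous burst can’t raise its own threshold in the same tick.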

A documented incident playbook. When something does go wrong, the team has minutes, not hours, to act. The playbook should specify: who has pause authority, how to verify a pause is needed, who notifies the community, who handles the post-mortem. The Wormhole recovery (where Jump funded the gap to keep the protocol solvent) only worked because the playbook existed.

What we’d avoid

Three patterns we’d avoid for new bridge builds.

Small multisigs with weak key custody. A 5-of-9 multisig where the keys live on a few laptops is the Ronin attack vector. If you’re not running an actual decentralized validator set, you don’t have a bridge — you have a custodial transfer service with extra steps.

Wrapped tokens with informal redemption mechanics. Some bridges issue wrapped tokens where the redemption right depends on the bridge operator continuing to honor it. This is a bank, not a bridge. Users should be able to redeem trustlessly, even if the bridge operator goes away.

Custom signature schemes. Don’t roll your own. Use well-vetted libraries. The Wormhole exploit was, mechanically, a custom signature verification with a subtle bug. Standard libraries with extensive audit history exist; use them.

Where the field is going

Two trends worth watching.

Native L2 → L1 bridges are getting better. Optimistic rollup bridges (Optimism, Arbitrum, Base) and zk rollup bridges (zkSync, StarkNet, Polygon zkEVM) inherit security from Ethereum. For movement between Ethereum and these L2s, you should be using the canonical bridge. Third-party bridges for these pairs add risk without much benefit.

Intent-based / solver-based architectures. Across, Squid, and similar systems use a solver model: the user says “I want X on chain B, here’s what I’ll pay,” solvers compete to fill the order, the protocol settles. This shifts risk away from a long-lived multisig holding TVL toward short-lived solver inventory. The properties are different — and arguably better — for the user.

If you’re building a bridge or integrating one and want a security review of the architecture before mainnet, we do this work.