On January 25, 2026, SwapNet — a DEX aggregator — lost over $13 million across Ethereum, Arbitrum, Base, and BSC. Aperture Finance got hit the same day through the same vulnerability. Both protocols shared a weakness: their smart contracts accepted arbitrary low-level calls without checking what those calls actually targeted.
The attack was simple in principle. Users had granted token approvals to these contracts for normal swap operations, and the contracts were supposed to forward calls only to legitimate pools and routers. But because input validation was weak, the attacker could substitute a token address as the call target, making the contract execute transferFrom() on the users' approved tokens and drain them straight to the attacker's wallet.
No private keys were stolen. No complex exploit chain. Just a crafted function call that the contract should have rejected but didn't.
The closed-source problem
Both SwapNet and Aperture Finance ran closed-source contracts, and that completely changes how you approach forensic analysis.
Without source code, you're working with decompiled bytecode — thousands of lines of nested branching logic where the intent behind any given function is ambiguous. You can reconstruct what happened from execution traces, but you can't easily determine what the contract was supposed to do. That distinction between intended behavior and actual behavior is exactly what legal teams need, and it's the hardest thing to establish when the code isn't readable.
I've worked on cases like this where the first question from legal is: "Was this a bug or a feature?" With closed-source contracts, you're essentially reverse-engineering the answer from bytecode and transaction traces. It's doable, but it requires a different skillset than reading Solidity.
How the exploit worked
The vulnerable function accepted user-supplied parameters that controlled a low-level call. Normally, those parameters would point to a router or liquidity pool. The attacker replaced them with token contract addresses — USDC, USDT, whatever the user had approved.
The contract then executed USDC.transferFrom(victim, attacker, amount). Every user who had granted unlimited approvals was exposed.
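The mechanics can be sketched in Python as a toy model. This is not the actual contract code; Token, VulnerableRouter, and every name below are hypothetical stand-ins for ERC-20 allowance bookkeeping and the unvalidated call forwarding:

```python
class Token:
    """Toy ERC-20: balances plus (owner, spender) -> allowance."""

    def __init__(self):
        self.balances = {}
        self.allowances = {}

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, caller, src, dst, amount):
        # Spends the allowance granted to `caller` -- here, the router.
        key = (src, caller)
        assert self.allowances.get(key, 0) >= amount, "insufficient allowance"
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.allowances[key] -= amount
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class VulnerableRouter:
    """Forwards whatever (target, fn, args) the caller supplies. No whitelist."""

    name = "router"

    def execute(self, target, fn, *args):
        # The router is msg.sender for the inner call, so the token checks
        # the ROUTER's allowance -- the one every user granted for swaps.
        return getattr(target, fn)(self.name, *args)


usdc = Token()
usdc.balances["victim"] = 1_000_000
router = VulnerableRouter()
usdc.approve("victim", router.name, 2**256 - 1)  # unlimited approval

# The attacker passes the TOKEN as the call target instead of a pool:
router.execute(usdc, "transfer_from", "victim", "attacker", 1_000_000)
print(usdc.balances["attacker"])  # 1000000
```

The key detail the model captures: the inner call's allowance check is against the router, not the attacker, so every approval users granted for legitimate swaps becomes spendable by whoever controls the call target.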
This pattern repeated across four chains. Same flaw, different deployments. The attacker didn't need to find four different bugs — one architectural weakness was enough.
What this means practically
If you're assessing contracts that hold token approvals and accept user-controlled call targets, this is the risk: any function that passes user input into a low-level call without strict whitelisting is a potential drain vector.
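The fix is to reject unvetted targets before the call is ever made. A minimal sketch of that check, with placeholder addresses and a stubbed-out low-level call (all names hypothetical):

```python
# Immutable set of vetted call targets; placeholder addresses for illustration.
APPROVED_TARGETS = frozenset({"0xPoolA", "0xRouterB"})


def low_level_call(target: str, calldata: bytes) -> None:
    """Stub standing in for the actual external call."""
    print(f"calling {target} with {len(calldata)} bytes")


def execute(target: str, calldata: bytes) -> None:
    # Validate BEFORE forwarding -- a token address never appears in the set,
    # so a substituted transferFrom target is rejected outright.
    if target not in APPROVED_TARGETS:
        raise PermissionError(f"call target {target} not whitelisted")
    low_level_call(target, calldata)


execute("0xPoolA", b"\x12\x34")  # allowed: vetted pool
try:
    execute("0xUSDC", b"\x23\xb8\x72\xdd")  # blocked: token as target
except PermissionError as e:
    print(e)
```

A strict allowlist is blunt but effective here: the contract loses some flexibility, and in exchange user input can no longer steer the call into an approval-holding token contract.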
The SwapNet case also exposed a user-side risk most people overlook. Users who disabled "one-time approval" settings — opting for unlimited approvals instead — had their entire approved balance at risk, not just the amount for a single trade. That permission model amplified the blast radius dramatically.
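The amplification is easy to quantify. Worst-case loss through a compromised spender is capped by whichever is smaller, balance or allowance, so an exact per-trade approval bounds the damage while an unlimited one does not (hypothetical numbers):

```python
UINT256_MAX = 2**256 - 1  # the "unlimited" approval value


def exposure(balance: int, allowance: int) -> int:
    """Worst-case amount a compromised spender can pull via transferFrom."""
    return min(balance, allowance)


balance = 50_000
per_trade = exposure(balance, allowance=1_000)       # one-time approval
unlimited = exposure(balance, allowance=UINT256_MAX)  # unlimited approval
print(per_trade, unlimited)  # 1000 50000
```

With the one-time approval the attacker gets one trade's worth; with the unlimited approval, the entire balance.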
For anyone doing post-incident analysis on closed-source contracts: start with execution traces, reconstruct the call flow from bytecode decompilation, and document where your interpretation relies on inference rather than readable code. That transparency is what makes the analysis hold up.
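One concrete triage step over those traces can be sketched as follows. The trace shape and field names here are hypothetical (real traces would come from something like debug_traceTransaction and need decoding first); only the transferFrom selector is real:

```python
# keccak("transferFrom(address,address,uint256)")[:4] -- the real ERC-20 selector.
TRANSFER_FROM_SELECTOR = "0x23b872dd"


def flag_suspicious(trace, tx_sender):
    """Flag transferFrom calls that moved a third party's tokens.

    A transferFrom whose `from` argument is not the transaction sender
    means someone else's allowance was spent -- the drain signature.
    """
    hits = []
    for call in trace:
        if call["selector"] == TRANSFER_FROM_SELECTOR:
            frm, _to, _amount = call["args"]
            if frm != tx_sender:
                hits.append(call)
    return hits


# Hypothetical decoded trace: one drain call, one unrelated inner call.
trace = [
    {"to": "0xTOKEN", "selector": "0x23b872dd", "args": ("victim", "attacker", 500)},
    {"to": "0xPOOL", "selector": "0xaaaaaaaa", "args": ()},
]
print(flag_suspicious(trace, tx_sender="attacker"))
```

A filter like this won't establish intent on its own, but it narrows thousands of trace entries down to the handful of calls the legal write-up actually needs to explain.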