LegalLean makes legal logic machine-checkable. When a system decides you qualify for a visa, owe a tax, or violate a regulation, LegalLean lets you verify exactly which parts of that decision are provable and which require human judgment.
Every day, computer systems make decisions about people's lives: whether you qualify for a tax deduction, whether your visa application meets the criteria, whether a telecommunications company is complying with consumer protection rules. These systems are increasingly powered by AI.
But here's the problem: you can't check their reasoning. When a system says "not eligible", you don't know whether that conclusion rests on a clear, verifiable rule (you didn't file on time) or on a judgment call that a human should be making (whether your work counts as "extraordinary").
These systems give you an answer but no way to distinguish a machine-provable fact from a contestable judgment. Both appear as the same output, so you can't tell where the computer's certainty ends and where human judgment should begin.
LegalLean draws that line explicitly. Every legal conclusion is marked as either formally verified (the computer proved it) or requiring human determination (with an explanation of why, who should decide, and what the contested boundary is).
LegalLean uses the same technology that mathematicians use to verify proofs (a tool called Lean 4) and applies it to law. Think of it as a spell-checker, but instead of checking spelling, it checks legal logic.
A human (or AI) translates a statute, regulation, or policy into a structured format that the computer can understand. For example: "If you're an individual taxpayer, and your mortgage is secured by your home, and it's under $750,000, you may deduct the interest."
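In Lean 4, such a rule might be sketched like this. This is an illustrative toy, not LegalLean's actual encoding: the structure, field names, and function names are hypothetical.

```lean
-- Hypothetical encoding of the mortgage-interest rule (illustrative names).
structure MortgagePayment where
  taxpayerIsIndividual : Bool
  securedByResidence   : Bool
  principal            : Nat  -- loan principal in dollars

-- The $750,000 dollar limit mentioned in the rule.
def withinDollarLimit (p : MortgagePayment) : Bool :=
  p.principal ≤ 750000

-- Interest may be deductible only when every condition holds.
def mayDeductInterest (p : MortgagePayment) : Bool :=
  p.taxpayerIsIndividual && p.securedByResidence && withinDollarLimit p
```

Once the rule is a definition like this, the type checker can reason about every possible input, not just sampled cases.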
Lean 4 verifies that every rule is consistent, every exception is properly handled, and no contradictions exist. If a rule says "you can deduct" but another says "you can't", the system catches it.
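As a toy illustration of how a contradiction surfaces (the names `Payment` and `deductible` are hypothetical): if two rules both applied to the same payment, one granting and one denying the deduction, the inconsistency is immediately derivable as `False`.

```lean
variable (Payment : Type) (deductible : Payment → Prop)

-- If a "you can deduct" rule and a "you can't" rule both fire on the
-- same payment, Lean exposes the contradiction directly.
example (p : Payment)
    (ruleA : deductible p)       -- "you can deduct"
    (ruleB : ¬ deductible p) :   -- "you can't"
    False :=
  ruleB ruleA
```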
Legal language is deliberately vague sometimes. "Qualified residence" could mean a house, a boat, or a mobile home. Instead of pretending the computer can decide this, LegalLean marks it as requiring a human (specifically, "IRS / Tax Court") and documents what the settled meaning is versus what's contested.
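A sketch of how a contested term can be typed rather than silently decided. The constructor names echo the `FormalisationBoundary.formal` / `boundary` distinction this document mentions for NICE gradings; the exact fields here are illustrative.

```lean
-- A formalisation boundary: either machine-provable, or explicitly
-- marked as requiring a named human authority (fields illustrative).
inductive FormalisationBoundary where
  | formal                          -- settled meaning, machine-provable
  | boundary (authority : String)   -- who should decide
             (contested : String)   -- what the contested question is

-- "Qualified residence" is marked as a human determination.
def qualifiedResidence : FormalisationBoundary :=
  .boundary "IRS / Tax Court"
    "Does a boat or mobile home count as a qualified residence?"
```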
The entire system can be independently checked in 15 seconds. No trust required. If you doubt the result, run the verification yourself. If there's an error, the computer will find it.
In 2017, the US Congress passed the Tax Cuts and Jobs Act (TCJA), which changed the rules for deducting mortgage interest. Before the TCJA, if you had a home equity loan, you could deduct the interest. After the TCJA, you couldn't.
LegalLean doesn't just tell you this. It proves it. The system contains a mathematical theorem (verified by the computer) that states:
"For every home equity interest payment by an individual taxpayer on secured property within the applicable dollar limits: the pre-2018 legal outcome is 'potentially deductible (human must verify if property is a qualified residence)' and the post-2018 outcome is 'definitely not deductible (machine-verified rejection)'."
This is not a test case. It covers every possible payment that meets the conditions, and the computer verified it. The conclusion cannot change unless the law, or its encoding, changes.
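A radically simplified sketch of the theorem's shape (not the actual LegalLean statement; all names are illustrative): because the outcome is defined as a total function of the taxable year, the claim is proved for every year from 2018 onward, not just for sampled inputs.

```lean
inductive Outcome where
  | potentiallyDeductible  -- human must verify "qualified residence"
  | notDeductible          -- machine-verified rejection

-- The post-TCJA outcome for home equity interest, as a total function.
def homeEquityOutcome (taxYear : Nat) : Outcome :=
  if taxYear ≥ 2018 then .notDeductible else .potentiallyDeductible

-- Verified for EVERY year ≥ 2018, not just a test case.
theorem tcja_rejects_home_equity (y : Nat) (h : y ≥ 2018) :
    homeEquityOutcome y = .notDeductible := by
  unfold homeEquityOutcome
  exact if_pos h
```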
A typical system says: "Your deduction was denied."
Why? Was it the TCJA? Was my property not "qualified"? Was my loan too large? The system doesn't say.
LegalLean says: "Your deduction was denied because your taxable year is 2018 or later and your loan is classified as home equity. This is a machine-verified rejection based on the TCJA."
Clear. Traceable. Verifiable. And if the system can't determine something (like whether your boat counts as a "qualified residence"), it says so explicitly.
For citizens: when a government system denies your application or claims you owe money, you deserve to know exactly why, and whether the decision rests on a provable rule or a judgment call.
For lawyers: identify exactly where a law is clear and where it's contested. Focus your arguments on the genuinely debatable points, not on relitigating what the computer can already prove.
For regulators: verify that your regulations are internally consistent before publishing them. Find contradictions, gaps, and ambiguities automatically. Know exactly which terms will require human interpretation.
For companies: build compliance systems with machine-checked guarantees. When your system says "compliant", you can point to a verified proof, not just a test suite.
For researchers: a new framework for studying the boundary between formalisable and non-formalisable law, with 44 verified theorems across three jurisdictions and a typed encoding of Hart's open texture.
Greater clarity and fairness. When laws are machine-checkable, it's harder to hide unjust outcomes behind opaque algorithms. The boundary between "the computer decided" and "a human should decide" becomes visible.
Honesty matters, so LegalLean is explicit about its limits.
It does not replace human judgment. Many legal questions require it, and LegalLean marks exactly where. It makes the boundary visible; it does not eliminate it.
It does not resolve vagueness. Law is deliberately vague in many places: "reasonable", "extraordinary", and "substantial" are terms that resist precise definition. LegalLean classifies this vagueness rather than pretending it doesn't exist.
It does not guarantee fidelity to the law. The system verifies that the encoding is internally consistent; whether the encoding faithfully represents the actual law requires human review. The computer checks logic, not legal interpretation.
LegalLean is currently demonstrated on three case studies: US tax, US immigration, and Australian telecoms regulation. But this style of reasoning is not limited to law. Clinical guidelines, drug dosage rules, and insurance coverage decisions follow the same pattern of clear rules with contested boundaries, so the next case studies will apply verified reasoning to healthcare:
The National Institute for Health and Care Excellence (NICE) publishes evidence-graded recommendations. Grade A evidence maps to FormalisationBoundary.formal; "consider" recommendations map to FormalisationBoundary.boundary. The evidence grading IS the formalisation boundary.
The British National Formulary specifies dosage ranges, contraindications, and interactions. These are highly formalisable: numeric thresholds that the type checker can verify, with clinical judgment needed at the boundaries (renal impairment adjustments, off-label use).
Coverage decisions ("medically necessary", "experimental treatment") are legal reasoning applied to healthcare. The same defeasible structure applies: base coverage defeated by exclusions, restored by exceptions. The same open texture persists: "medically necessary" is as vague as "qualified residence".
Treatment pathways with decision points: some machine-verifiable (blood test results), some requiring clinical judgment (patient preference, comorbidity weighting). LegalLean makes the boundary between protocol and judgment explicit.
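The defeasible pattern described for coverage decisions (base coverage, defeated by exclusions, restored by exceptions) can be sketched in a few lines of Lean; the structure and field names here are illustrative, not LegalLean's actual encoding.

```lean
-- A minimal sketch of defeasible coverage (illustrative names).
structure Claim where
  baseCovered    : Bool  -- covered in principle
  exclusionFires : Bool  -- e.g. classed as "experimental"
  exceptionFires : Bool  -- e.g. a statutory mandate restores cover

-- Base coverage stands unless an exclusion fires and no exception
-- restores it.
def covered (c : Claim) : Bool :=
  c.baseCovered && (!c.exclusionFires || c.exceptionFires)

-- Defeated by the exclusion, restored by the exception:
#eval covered { baseCovered := true, exclusionFires := true,
                exceptionFires := true }  -- true
```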
Everything in LegalLean can be independently verified. You don't need to trust us. You don't need to trust any AI. The mathematics is checkable.
1. Run it yourself: Clone the repository and run lake build. In 15 seconds, the Lean 4 type checker verifies all 44 theorems from scratch. Zero trust required.
2. Use Docker: Run docker build . to verify in a completely clean environment. The container starts from a blank Ubuntu installation, installs Lean 4, and verifies everything.
3. Ask an AI: Give any frontier AI model the source code and ask it to find errors. We've done this with three different AI systems. Their reviews are published in the repository.
4. Automated theorem proving: The project uses Aristotle by Harmonic, an IMO gold-medal-level automated theorem prover for Lean 4, to independently verify proofs. Aristotle's Monte Carlo Graph Search engine provides a second line of verification beyond the Lean type checker.
LegalLean is open source under the MIT licence. The academic paper, all 44 theorems, the verification protocol, and the adversarial AI reviews are all published.