OPEN SOURCE · MIT LICENSE

AI-augmented organ procurement coordination.

AORTA is an open-source framework for turning any capable AI model into a domain-expert colleague for organ procurement. Built by OPO professionals, for OPO professionals. Not a chatbot — a behavioral specification, policy corpus, and training methodology that produces a calibrated, safety-constrained AI partner for the hardest job in healthcare.

Get Started →
View on Hugging Face

The Problem

372 pages of policy.
56 OPOs. Zero AI tools built for the work.

Organ procurement coordinators manage life-or-death decisions across 21 chapters of OPTN policy, CMS Conditions for Coverage, institutional SOPs, and hospital protocols — often simultaneously, often at 3 AM. The complexity is increasing. The workforce is stretched. The margin for error is measured in human lives.

Current tools are transactional: data entry, match run interfaces, case management forms. None of them think. None of them know the policy. None of them can reason about the intersection of allocation rules, regulatory requirements, and institutional procedures under time pressure.


The Framework

A colleague, not a chatbot.

AORTA is not a product. It's a complete open-source framework — a behavioral specification, a RAG-optimized policy corpus, a reasoning trace methodology, and a training pipeline — that any OPO can use to deploy AI-augmented coordination support. The framework is substrate-portable: it works whether prompted into a frontier model or fine-tuned into a local deployment.

Confidence Calibrated

Every response carries an explicit confidence signal — HIGH when grounded in retrieved policy text, MODERATE when reasoning from domain knowledge, LOW at the knowledge edge. Coordinators always know how much to trust the answer.
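How the signal is surfaced is a deployment choice. As a minimal sketch in Python (the CONFIDENCE: tag and its placement are illustrative assumptions, not a format the framework mandates), a coordinator-facing tool could extract and act on the level like this:

import re
from enum import Enum

class Confidence(Enum):
    HIGH = "HIGH"          # grounded in retrieved policy text
    MODERATE = "MODERATE"  # reasoning from domain knowledge
    LOW = "LOW"            # at the knowledge edge

# Hypothetical convention: the model ends each answer with a line like "CONFIDENCE: MODERATE".
_TAG = re.compile(r"^CONFIDENCE:\s*(HIGH|MODERATE|LOW)\s*$", re.MULTILINE)

def extract_confidence(response: str) -> Confidence:
    """Pull the calibration tag from a response; treat a missing tag as LOW."""
    match = _TAG.search(response)
    return Confidence(match.group(1)) if match else Confidence.LOW

A UI built on this could render HIGH answers with their citations, visually flag MODERATE answers, and route LOW answers to a manual policy lookup.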

The Human Line

AORTA will never make a clinical decision, determine organ viability, override allocation, or replace the judgment of a physician, coordinator, or medical director. This boundary is architectural, not advisory — it's encoded in the model's training, not just its instructions.

Policy Grounded

Answers cite specific OPTN policy sections. The RAG corpus is structured with semantic metadata for precise retrieval. When AORTA doesn't have the policy text, it says so rather than generating plausible-sounding policy language.
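The exact header schema lives in the corpus repository; as a rough sketch (the '---'-delimited header and field names such as chapter are assumptions for illustration), retrieval code can read each chunk's metadata and filter before embedding or search:

from pathlib import Path

def read_chunk(path: Path) -> tuple[dict, str]:
    """Split a corpus chunk into its metadata header and policy text.
    Assumes a '---'-delimited block of 'key: value' lines at the top of the file."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}, text
    header, _, body = text[3:].partition("\n---")
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("-\n ")

# Illustrative filter: embed only chunks whose metadata marks them as allocation policy.
corpus_dir = Path("optn_corpus")  # placeholder path
allocation_chunks = [
    (meta, body)
    for meta, body in (read_chunk(p) for p in corpus_dir.glob("*.md"))
    if "allocation" in meta.get("chapter", "").lower()
]

Keeping the metadata separate from the policy text is what lets an answer cite a specific section rather than a whole chapter.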

Anti-Sycophantic

In organ procurement, telling someone what they want to hear can cost a life. AORTA is trained to maintain analytical integrity under pressure — from frustrated surgeons, stressed coordinators, or ambiguous situations where the easy answer isn't the right one.

AORTA-7B running locally in LM Studio — offline, HIPAA-safe, no data leaves the machine

The Toolkit

Everything you need. Nothing you don't.

Each component stands alone. Use the soul document with a frontier model API and nothing else. Or go deep — fine-tune a local model with the full training pipeline. The framework scales to your technical capacity and operational needs.

Soul Document · GitHub
The behavioral specification that defines what AORTA is — personality, constraints, calibration framework, Human Line architecture.

OPTN Policy Corpus · GitHub
21 OPTN policy chapters converted to 468 semantically chunked markdown files with metadata headers for RAG retrieval.

Reasoning Traces · GitHub
160-question bank across 15 tiers of complexity — from single-policy lookup to multi-hop cross-regulatory edge cases.

AORTA-7B Model · Hugging Face
QLoRA fine-tuned Qwen2.5-7B-Instruct — runs locally on a coordinator's laptop via LM Studio or Ollama.

Training Pipeline · GitHub
Dataset generation methodology, training scripts, hyperparameters, and the evaluation battery for validating any AORTA deployment.

System Prompt · GitHub
Activation key plus extended runtime reinforcement — the minimal prompt that reliably instantiates AORTA behavior.

Evaluation Battery · GitHub
21-question adversarial test across 7 categories with scoring rubric — verify that any deployment actually works.
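For the lightest-weight path, the soul document can simply ride along as the system prompt on any OpenAI-compatible endpoint, including the local server LM Studio exposes. A minimal sketch, assuming the prompt has been saved locally (the file name, port, and model identifier below are placeholders):

from pathlib import Path
from openai import OpenAI

# Any OpenAI-compatible endpoint works; LM Studio's local server is one option.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

# Placeholder file name for the soul document / system prompt.
soul = Path("aorta_system_prompt.md").read_text(encoding="utf-8")

reply = client.chat.completions.create(
    model="aorta-7b",  # placeholder; use the identifier your server or API exposes
    messages=[
        {"role": "system", "content": soul},
        {"role": "user", "content": "Walk me through the documentation requirements for a DCD case."},
    ],
    temperature=0.2,
)
print(reply.choices[0].message.content)

The same pattern points at a hosted frontier-model API by swapping base_url, api_key, and model.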

Who This Is For

From the coordinator's desk to the CTO's office.

OPO Coordinators

A policy reference that reasons, not just retrieves. Ask it about the intersection of DCD protocols with allocation rules at 3 AM and get a cited, calibrated answer.

OPO IT Teams

A complete deployment guide from model selection through RAG configuration to system prompt architecture. Works with local inference or hosted APIs.

OPO Leadership

A framework for evaluating AI deployment in organ procurement — with safety constraints, compliance considerations, and measurable outcomes built in.

Researchers

A case study in domain-specific AI deployment for safety-critical healthcare operations. The methodology generalizes beyond organ procurement.


Origin

Built inside the house.

AORTA was developed at a US organ procurement organization by a systems administrator who understood both the operational reality of procurement coordination and the current capabilities of open-weight AI models. It emerged from a simple observation: the people who do this work deserve tools that think, not just tools that store data.

This is not a startup. There is no funding round, no sales team, no enterprise pricing page. AORTA is released under the MIT license because organ procurement is a public trust. Every OPO in the country operates under the same OPTN policies, serves the same mission, and faces the same complexity. The tools that help coordinators navigate that complexity should be shared, not sold.


What's Next

The framework is just the foundation.

AORTA-Bench · IN PROGRESS
A standardized, versioned evaluation benchmark for any AI system operating in organ procurement — model-agnostic, vendor-agnostic, freely available. If you're deploying AI at an OPO, this is how you know it works.

CMS CoP Corpus · PLANNED
Extending the RAG corpus beyond OPTN policy to include CMS Conditions for Coverage — the other regulatory framework coordinators navigate daily.

Adaptation Guide · PLANNED
A step-by-step methodology for any OPO to customize AORTA with their own SOPs, workflows, and institutional knowledge while preserving the safety architecture.

AORTA-Central · EXPLORING
A hosted deployment path for larger models with deeper reasoning capacity — for the questions that need more than a 7B laptop model can provide.