
Engineer & founder building production-grade autonomous agents, grounded in strong software engineering to solve real-world problems.
I build agentic systems — reliable, observable, and designed to survive production, not just demos.
I'm a Software Engineer and MS Computer Science (AI/ML) student at Duke University, focused on building production-grade autonomous systems — from multi-agent orchestration and LLM tooling to the distributed backends that run them reliably at scale.
I'm the sole founder and engineer of VYNN AI, an agentic financial analyst platform built end-to-end and deployed to ~500 pilot users.
Previously, I designed core components of AutoCodeRover, an autonomous code repair system acquired by Sonar, integrating agentic reasoning directly into JetBrains IDEs. In parallel, I've led research as sole first author on multi-agent LLM frameworks for medical text mining, achieving 98.2% sensitivity across 15 systematic reviews (~150K citations).
This summer, I’m at Robinhood in Menlo Park as a Machine Learning Engineer on the Central AI team, continuing my focus on building autonomous systems that operate reliably at real-world scale.
Systems that are reliable, observable, and production-ready — not just demo-ready. I care deeply about turning ideas into robust software that solves real problems and serves real users.
Agent harness design — the infrastructure layer that makes agents actually work in production. The agent itself is the easy part; the harness that makes it reliable, observable, and debuggable is what I want to build.
M.S. Computer Science (AI/ML)
2025 – 2027 · Graduate Teaching Assistant
Duke Scholar
View official scholar profile
B.Comp. in Computer Science (Honours)
2021 – 2025 · Distinction
Distinction in Software Engineering
View verified credential
Exchange Semester
The University of Hong Kong · Fall 2023

On the Central Agentic team, working on agent reliability and evaluation — the infrastructure layer that lets Robinhood ship AI products into a regulated financial domain. Building Kafka-based news/market data pipelines, Braintrust-driven evals, and post-training workflows (SFT on Databricks) spanning closed-source (GPT-4.1) and open-source models.

Designed, built, and deployed a full-stack agentic financial analysis platform as sole engineer — from LangGraph multi-agent backend and FastAPI orchestration layer to React dashboard and production infrastructure on Hetzner Cloud. Serves ~500 pilot users with institutional-quality equity research (DCF modeling, news intelligence, automated reports) in under 7 minutes end-to-end.

Architected and led CS 590 (Software Development Studio), where graduate students build AI debugging agents inspired by AutoCodeRover and deploy full-stack applications. Also mentored teams in CS 408 and CS 390 on software architecture, DevOps, and LLM-oriented programming — shipping production software for real clients.
Built the JetBrains IDE plugin end-to-end for autonomous code repair — GumTree-based 3-way AST merge, embedded SonarLint analysis, and real-time SSE streaming with per-step developer feedback. Enhanced the agentic repair backend with LLM-as-a-Judge self-improvement, lifting SWE-bench Verified to 51.6% (state-of-the-art among open-source agents). Core technology acquired by Sonar.

Built backend validation infrastructure for Binance's Boosters campaign — automated API regression suites in CI, load-tested services to ~500K concurrent transactions via JMeter, and instrumented monitoring to catch consistency failures before production. Worked directly with backend and Web3 Wallet engineers to root-cause and patch defects, cutting resolution time by 40%.

Led research as first author on a multi-agent AI framework for medical evidence synthesis. Designed and built LUMINA, a four-agent LLM framework that automates citation screening for medical systematic reviews — achieving 98.2% sensitivity and 87.9% specificity across 15 SRMAs (~150K citations) with a 35× reduction in false negatives vs. prior state-of-the-art.
Bloomberg-grade equity research, built for retail.
A LangGraph supervisor orchestrates 7 specialized agents — fundamentals, news intelligence, DCF modeling, report generation, and a 3-layer recommendation engine with deterministic validation. Every figure traces to a deterministic source; every recommendation enforces ≥97% citation coverage. Built end-to-end as sole engineer in Python + React/TypeScript on Docker infrastructure (Hetzner Cloud) and shipped to ~500 pilot users in production.
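The supervisor pattern can be sketched in plain Python. This is illustrative only: VYNN's actual LangGraph graph, agent names, and state schema are not shown here, and `supervisor`, `AGENTS`, and the dict-based state are hypothetical stand-ins.

```python
# Minimal supervisor/worker orchestration sketch (plain Python, no LangGraph).
# Agent names, routing rules, and the dict-based state are illustrative.
from typing import Callable

State = dict

def fundamentals(state: State) -> State:
    state["fundamentals"] = {"pe": 21}            # stand-in for real analysis
    return state

def dcf(state: State) -> State:
    state["valuation"] = state["fundamentals"]["pe"] * 10
    return state

def report(state: State) -> State:
    state["report"] = f"Fair value estimate: {state['valuation']}"
    return state

AGENTS: dict[str, Callable[[State], State]] = {
    "fundamentals": fundamentals, "dcf": dcf, "report": report,
}

def supervisor(state: State) -> str:
    # Deterministic routing: each missing artifact decides the next agent.
    if "fundamentals" not in state:
        return "fundamentals"
    if "valuation" not in state:
        return "dcf"
    if "report" not in state:
        return "report"
    return "END"

def run(state: State) -> State:
    while (nxt := supervisor(state)) != "END":
        state = AGENTS[nxt](state)
    return state

final = run({})
print(final["report"])  # → Fair value estimate: 210
```

The point of the pattern: the supervisor owns routing, each agent owns one artifact, and a figure only enters the state through a deterministic producer, which is what makes the output traceable.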
System Architecture
< 7 min
End-to-end equity analysis — fundamentals, news intel, DCF modeling, validated PDF report
0.985
Reproducibility score across paired runs (CV 0.016) — symbolic outputs match exactly under identical inputs
97%
Citation coverage enforced on every recommendation — zero invented numbers, every figure traceable
~500
Pilot users on production Hetzner Cloud infrastructure with zero-downtime deployments
Brought autonomous code repair from research to a production developer tool. AutoCodeRover is a multi-agent system that resolves real GitHub issues end-to-end — reproducing bugs, searching codebases across 7 languages via tree-sitter, generating patches with iterative refinement, and self-correcting through an LLM-as-a-Judge reviewer. I built the JetBrains IDE plugin end-to-end in Kotlin: a conversational agent UI with real-time SSE streaming, GumTree-based three-way AST merge for conflict-free patch application, embedded SonarLint static analysis, and a feedback loop where developers can critique any reasoning step to trigger guided re-runs. On the backend, I designed the self-fix agent that diagnoses inapplicable patches and autonomously replays the pipeline from the most suspicious stage — lifting SWE-bench Verified to 51.6%. The core technology was acquired by Sonar. Sonar Foundation Agent, built on the AutoCodeRover core, has since reached 79.2% on SWE-bench Verified — #1 on the leaderboard (Feb 2026).
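The three-way merge idea — reconciling the agent's patch with the developer's local edits relative to a common baseline — can be illustrated at line level. GumTree operates on ASTs, not lines; this simplification and the `merge3` helper are mine, not AutoCodeRover's code.

```python
# Line-level three-way merge sketch. AutoCodeRover's real merge is AST-based
# (GumTree); this version works per line position just to show the rule.
# Assumes equal-length inputs (no insertions/deletions), unlike the real thing.
def merge3(base: list[str], local: list[str], patched: list[str]) -> list[str]:
    merged = []
    for b, l, p in zip(base, local, patched):
        if l == p:            # both sides agree (or neither changed)
            merged.append(l)
        elif l == b:          # only the agent's patch changed this line
            merged.append(p)
        elif p == b:          # only the developer changed this line
            merged.append(l)
        else:                 # both changed it differently -> conflict marker
            merged.append(f"<<< local: {l} ||| patched: {p} >>>")
    return merged

base    = ["def f(x):", "    return x", "print(f(1))"]
local   = ["def f(x):", "    return x", "print(f(2))"]      # dev edited call
patched = ["def f(x):", "    return x + 1", "print(f(1))"]  # agent fixed body
print(merge3(base, local, patched))
# → ['def f(x):', '    return x + 1', 'print(f(2))']
```

Working on trees instead of lines is what lets the real merge stay conflict-free when the local file has merely moved or reformatted the code the patch touches.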
Repair Pipeline Architecture
51.6%
SWE-bench Verified (Jan 2025)
State-of-the-art across 2,294 real GitHub issues — highest among open-source agents
+13.2 pts
Resolve Rate Improvement
Lifted SWE-bench Verified from 38.4% (Jun 2024) to 51.6% (Jan 2025) — via Self-Fix Agent with LLM-as-a-Judge and interactive feedback loops
3-Way
AST Merge (GumTree)
Conflict-free patch application when local code has diverged from agent's baseline
7
Languages Supported
Tree-sitter search across Python, Java, JS, TS, C/C++, Go, PHP
Describe a bug → ACR localizes, patches, and validates autonomously
Embedded static analysis for Java/Python with one-click ACR fixes
GumTree conflict resolution across baseline/modified/patched
Critique any agent reasoning step — triggers guided pipeline re-run
LLM-as-a-Judge diagnoses inapplicable patches and replays from failure point
Auto-captures IDE build and test failures with one-click ACR submission
The industry keeps hitting the same wall: every agent team rebuilds the same plumbing — context management, rollback, multi-agent coordination, transparency — and glues it together with progress files and ad-hoc scripts. The accompanying blog post argues that what's actually needed isn't another framework. It's an operating system: a general-purpose substrate where developers plug in agents and the kernel handles orchestration, memory, state, and auditability.
taste is the implementation: a three-core CPU model (Opus 4.7 planner, Sonnet 4.6 workers, Haiku 4.5 monitor) with git as the memory substrate — branches are execution contexts, commits are checkpoints, git reset --hard is rollback, and git worktree gives every parallel worker real filesystem-level process isolation. Three end-to-end demos shipped. More coming soon.
Kernel Loop & Git Substrate
$0.096
Real-Claude end-to-end demo
todo_api: 43s, 15/15 tests green, zero rollbacks on Sonnet 4.6 (7 calls, 16.5K input / 3.1K output tokens)
33%
Wall-clock reduction (parallel)
Three worktrees running concurrently: 21.5s vs ~32s sequential on matched three-step task
Atomic
Commit-or-rollback per step
Failed step → git reset --hard to last passing checkpoint; session branch stays clean, no zombie commits
Opt-in
Every subsystem is composable
Build to delete: when models self-evaluate reliably, disable the Monitor — kernel API survives untouched
Planner (Opus 4.7) → Worker (Sonnet 4.6) → Monitor (Haiku 4.5) — the reasoning sandwich, structurally separated
Every kernel artifact is a commit. plan.json, monitor/step-NN.json, agent spec — all versioned, navigable, audit-replayable
Each parallel worker gets a real filesystem-level git worktree. No virtual branches, no shared working directory
pytest (or LLM-judge) evaluates each step before commit. On fail: git reset --hard, retry in fresh worktree
.git/taste/events.jsonl survives git reset --hard — rollback doesn't erase the audit trail
taste dashboard renders timeline, per-step outcomes, and branch topology into one self-contained HTML — htop for agents
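The commit-or-rollback loop can be sketched without a real repository. In this sketch, in-memory snapshots stand in for git commits, fresh copies stand in for worktrees, and `evaluate` stands in for pytest or an LLM judge; `run_steps` and every name in it are hypothetical, not taste's API.

```python
# Kernel-loop sketch: each step either commits a checkpoint or rolls back.
# In taste the substrate is git (commit / reset --hard / worktree); here
# in-memory snapshots stand in for commits so the control flow runs anywhere.
import copy

def run_steps(state, steps, evaluate, max_retries=1):
    checkpoint = copy.deepcopy(state)          # last passing "commit"
    log = []                                   # audit trail survives rollback
    for name, step in steps:
        for attempt in range(max_retries + 1):
            candidate = step(copy.deepcopy(checkpoint))  # fresh "worktree"
            if evaluate(candidate):
                checkpoint = candidate         # commit: advance the baseline
                log.append((name, "commit"))
                break
            log.append((name, f"rollback (attempt {attempt + 1})"))
        # On exhausted retries the baseline is untouched: no zombie state.
    return checkpoint, log

steps = [
    ("add_a", lambda s: {**s, "a": 1}),
    ("bad",   lambda s: {**s, "broken": True}),   # fails evaluation
    ("add_b", lambda s: {**s, "b": 2}),
]
final, log = run_steps({}, steps, evaluate=lambda s: "broken" not in s)
print(final)  # → {'a': 1, 'b': 2}: the failed step never contaminates state
print(log)
```

The design choice the sketch captures: evaluation gates the commit, not the step, so a failed step leaves the baseline exactly where the last green checkpoint put it — while the log (like `.git/taste/events.jsonl`) records the rollback anyway.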
Actively building · expect more soon
Architecture deep dive
Architected and led hands-on production engineering for 48 students across 15 teams — Docker, CI/CD, and agentic AI systems shipped as reproducible labs and debugger benchmarks, not slides.