
Engineer & founder building production-grade autonomous agents, grounded in strong software engineering to solve real-world problems.
I build agentic AI systems that actually work in the real world — reliable, observable, and designed to survive production, not just demos.
I'm a Software Engineer and MS Computer Science (AI/ML) student at Duke University, focused on building production-grade autonomous systems — from multi-agent orchestration and LLM tooling to the distributed backends that run them reliably at scale.
I'm the sole founder and engineer of VYNN AI, an agentic financial analyst platform built end-to-end and deployed to ~500 pilot users.
Previously, I designed core components of AutoCodeRover, an autonomous code repair system acquired by Sonar, integrating agentic reasoning directly into JetBrains IDEs. In parallel, I’ve led research on multi-agent LLM frameworks for large-scale medical text mining, with work currently under review at NEJM AI.
This summer, I’ll be joining Robinhood in Menlo Park as a Machine Learning Engineer on the Agentic AI team, continuing my focus on building autonomous systems that operate reliably at real-world scale.
I build systems that are reliable, observable, and production-ready, not just demo-ready. I care deeply about turning ideas into robust software that solves real problems for real users.
M.S. Computer Science (AI/ML)
2025 – 2027 · GPA: 3.77
Duke Scholar
B.Comp. in Computer Science
2021 – 2025 · Distinction
Distinction in Software Engineering
Exchange Semester at HKU (Fall 2023)

Building ML-powered agentic systems for Robinhood's financial products this summer.

Built and deployed an end-to-end agentic financial analysis platform as sole engineer, serving ~500 pilot users. Multi-agent orchestration, real-time data pipelines, and autonomous LLM-driven analysis — all in production.

Teaching 3 CS courses — covering software architecture, DevOps, AI agents, and LLM-oriented programming. Mentoring student teams shipping production software for real clients.
Architected the JetBrains IDE plugin for autonomous code repair with AST-level patch alignment via GumTree. Enhanced the agentic repair algorithm with LLM-as-a-Judge self-improvement and continuous user feedback loops. Achieved 46% on SWE-bench Verified. Core technology acquired by Sonar.
End-to-end test automation for Binance's Boosters campaign across ~500K simulated transactions. Achieved 85% automation coverage and cut cross-team bug resolution time by 40%.

Primary contributor to manuscript under review at NEJM AI. Built a multi-agent LLM framework for citation screening across ~150K abstracts, achieving 99.5% sensitivity.
A full-stack financial intelligence platform combining conversational AI analysis, real-time market data streaming, portfolio management, and automated reporting — built end-to-end as sole engineer and deployed to ~500 pilot users in production.
System Architecture
< 7 min
Full equity analysis — data scraping, DCF modeling, news intel, and PDF report generation
72%
Latency reduction via parallel agent execution and result caching
Real-Time
Dual WebSocket streams for live prices and news with auto-reconnect and health checks
~500
Pilot users on production Hetzner Cloud infrastructure with zero-downtime deployments
SSE streaming with log batching, multi-conversation management, downloadable XLSX + PDF reports
Live prices, stock charts, news aggregation
Multi-portfolio, real-time P&L, holdings CRUD
6 interactive chart types (area, bar, pie, radar, scatter, treemap), one-click PNG export
Company, sector, and global market reports with batch generation and smart polling
OAuth, passwordless login, HTTP-only cookies, cross-tab sync, user-scoped storage
Brought autonomous code repair from research to a production developer tool. AutoCodeRover is a multi-agent system that resolves real GitHub issues end-to-end — reproducing bugs, searching codebases across 7 languages via tree-sitter, generating patches with iterative refinement, and self-correcting through an LLM-as-a-Judge reviewer. I built the JetBrains IDE plugin end-to-end in Kotlin: a conversational agent UI with real-time SSE streaming, GumTree-based three-way AST merge for conflict-free patch application, embedded SonarLint static analysis, and a feedback loop where developers can critique any reasoning step to trigger guided re-runs. On the backend, I designed the self-fix agent that diagnoses inapplicable patches and autonomously replays the pipeline from the most suspicious stage — lifting SWE-bench Verified to 46%. The core technology was acquired by Sonar.
Repair Pipeline Architecture
46%
SWE-bench Verified
State-of-the-art across 500 human-validated real GitHub issues — highest among open-source agents
50%
Patch Precision
1.8× higher than next best (Agentless at 27%) — reviewer agent reduces noise for developers
3-Way
AST Merge (GumTree)
Conflict-free patch application when local code has diverged from agent's baseline
7
Languages Supported
Tree-sitter search across Python, Java, JS, TS, C/C++, Go, PHP
Describe a bug → ACR localizes, patches, and validates autonomously
Embedded static analysis for Java/Python with one-click ACR fixes
GumTree conflict resolution across baseline/modified/patched
Critique any agent reasoning step — triggers guided pipeline re-run
LLM-as-a-Judge diagnoses inapplicable patches and replays from failure point
Auto-captures IDE build and test failures with one-click ACR submission
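The self-fix loop described above (an LLM-as-a-Judge diagnoses a failed patch and the pipeline replays from the most suspicious stage) can be sketched as follows. The stage names and the keyword heuristic are illustrative stand-ins; AutoCodeRover's real judge is an LLM call and its pipeline differs in detail:

```python
# Hypothetical pipeline stages, in execution order.
STAGES = ["reproduce", "localize", "generate_patch", "validate"]

def judge_failure(patch_error: str) -> str:
    """LLM-as-a-Judge stand-in: map a failure to the most suspicious stage.
    In the real system this is an LLM call; here it's a keyword heuristic."""
    if "does not apply" in patch_error:
        return "generate_patch"
    if "wrong file" in patch_error:
        return "localize"
    return "reproduce"

def replay_from(stage: str) -> list[str]:
    """Re-run the pipeline starting at the diagnosed stage, keeping the
    (expensive) results of all earlier stages intact."""
    start = STAGES.index(stage)
    return STAGES[start:]

# A patch that failed to apply triggers a replay from patch generation only.
rerun = replay_from(judge_failure("patch does not apply cleanly"))
# rerun == ["generate_patch", "validate"]
```

Replaying from the diagnosed stage rather than restarting the whole pipeline is what keeps the self-correction loop cheap enough to run on every failure.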
Designed and built LUMINA, a four-agent framework that automates citation screening for medical systematic reviews and meta-analyses. A classifier agent triages citations, a detailed screening agent applies PICOS-guided Chain-of-Thought evaluation, a reviewer agent audits each decision via LLM-as-a-Judge, and an improvement agent self-corrects when disagreements arise — mirroring the human peer-review process. Evaluated on 15 SRMAs across ~90K citations from BMJ, JAMA, and Lancet journals: achieved 98.2% mean sensitivity (10 of 15 at perfect 100%) with a 1.8% false negative rate, dramatically outperforming published baselines by Li et al. (37% sensitivity) and Strachan (58%).
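The four-agent flow above can be sketched as a simple pipeline: triage, screen, audit, and self-correct on disagreement. The agent bodies below are trivial placeholders, assuming each would be an LLM call in LUMINA itself:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    include: bool
    rationale: str

# Hypothetical stand-ins for LUMINA's four agents.
def classifier_agent(abstract: str) -> bool:
    # Coarse triage: quickly discards clearly irrelevant citations.
    return "randomized" in abstract

def screening_agent(abstract: str) -> Decision:
    # PICOS-guided Chain-of-Thought evaluation in the real system.
    return Decision("randomized" in abstract and "adults" in abstract,
                    "matches population and study-design criteria")

def reviewer_agent(abstract: str, d: Decision) -> bool:
    # LLM-as-a-Judge audit: does the rationale support the decision?
    return True  # placeholder: always agrees

def improvement_agent(abstract: str, d: Decision) -> Decision:
    # Self-correction pass, run only when reviewer and screener disagree.
    return screening_agent(abstract)

def screen(abstract: str) -> Decision:
    if not classifier_agent(abstract):
        return Decision(False, "triaged out by classifier")
    decision = screening_agent(abstract)
    if not reviewer_agent(abstract, decision):
        decision = improvement_agent(abstract, decision)
    return decision

d = screen("A randomized trial of statins in adults with ...")
```

The audit-then-correct step mirrors human peer review: a second reader challenges the first, and disagreements trigger a re-evaluation rather than a silent override.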
98.2%
Sensitivity
10 of 15 reviews at perfect 100% — near-zero missed studies
1.8%
False Negative Rate
vs. 63% (Li et al.) and 42% (Strachan) — 35× reduction
15
Systematic Reviews
~90K citations from BMJ, JAMA, Lancet — $0.07 per 10 articles
Teaching software engineering, DevOps, and agentic AI systems to undergraduate and graduate students at Duke.