Research Projects

A selection of my research and engineering projects spanning physics-informed machine learning, affective computing, healthcare AI, and natural language processing. All source code is publicly available on GitHub.


Physics-Informed Neural Networks for PDE Solutions

UC Davis · Research Assistant · 2024 – Present · GitHub Repo ↗

Developed physics-informed neural network (PINN) architectures that embed governing physical laws — such as partial differential equations (PDEs) — directly into the loss function, enabling the network to learn physically consistent solutions without requiring large labeled datasets. Applied Fourier feature embeddings to overcome spectral bias in standard MLPs, allowing the model to capture high-frequency solution components in multi-scale problems. Implemented adaptive loss weighting strategies to balance PDE residual loss, boundary condition loss, and initial condition loss during training. Validated on benchmark PDEs including Burgers’ equation and the heat equation, demonstrating improved convergence speed and solution accuracy compared to vanilla PINNs. Published in the CSCI 2024 proceedings, available via IEEE Xplore.

Tech Stack: Python, PyTorch, CUDA, NumPy, Matplotlib · Methods: Fourier Features, Adaptive Loss Balancing, PDE-Constrained Optimization, Spectral Bias Mitigation
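The Fourier feature embedding used to mitigate spectral bias can be sketched framework-agnostically. This minimal NumPy version (function and variable names are illustrative, not taken from the project repo) maps low-dimensional PINN inputs such as (x, t) through random sinusoidal features before they reach the MLP:

```python
import numpy as np

def fourier_features(x, B):
    """Map inputs x of shape (N, d) to random Fourier features (N, 2m).

    B is an (m, d) frequency matrix, typically drawn from N(0, sigma^2);
    a larger sigma biases the downstream MLP toward higher-frequency
    solution components, counteracting spectral bias.
    """
    proj = 2.0 * np.pi * x @ B.T                         # (N, m) projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 2))  # 64 frequencies for (x, t) inputs
x = rng.uniform(size=(128, 2))            # collocation points in [0, 1]^2
z = fourier_features(x, B)
print(z.shape)                            # (128, 128)
```

In a PINN this embedding replaces the raw coordinate input; since sin/cos are smooth, automatic differentiation of the PDE residual passes through it unchanged.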

Affective Modeling in AI Narrative Systems

UC Davis · Research Assistant · Jun 2025 – Present · GitHub Repo ↗

Designing emotion-aware AI narrative systems that dynamically adapt storytelling based on real-time user affect, contributing to a paper under review at CHI 2026. Benchmarked 6+ deep learning architectures — including LSTM, CNN, Transformer, ResNet, InceptionTime, and LSTM-FCN — for physiological time series classification to detect emotional states from sensor data. Implemented Grad-CAM for temporal interpretability, producing attention visualizations that highlight which time steps most influence model predictions, providing insights into how models perceive emotional transitions. Collaborated with faculty on experimental design, IRB protocols, statistical analysis (mixed-effects models, Bayesian inference), and manuscript preparation targeting top-tier HCI venues.

Tech Stack: Python, PyTorch, Hugging Face Transformers, Matplotlib, SciPy · Methods: Time Series Classification, Grad-CAM Temporal Interpretability, Affective Computing, Mixed-Effects Statistical Modeling
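The core of the Grad-CAM adaptation for time series reduces to a channel-weighting step, sketched below in NumPy. In the actual project the activations and gradients would be captured from a trained PyTorch model via hooks; here they are synthetic arrays:

```python
import numpy as np

def temporal_grad_cam(activations, gradients):
    """1-D Grad-CAM sketch for time series classifiers.

    activations: (C, T) feature maps from a conv layer for one sample.
    gradients:   (C, T) gradients of the target class score w.r.t.
                 those feature maps.
    Returns a length-T importance curve in [0, 1] marking which time
    steps most influenced the prediction.
    """
    weights = gradients.mean(axis=1, keepdims=True)             # (C, 1) channel weights
    cam = np.maximum((weights * activations).sum(axis=0), 0.0)  # weighted sum + ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                   # normalize to [0, 1]
    return cam

rng = np.random.default_rng(1)
A = rng.normal(size=(32, 200))   # 32 channels, 200 time steps (synthetic)
G = rng.normal(size=(32, 200))
cam = temporal_grad_cam(A, G)
print(cam.shape)                 # (200,)
```

Plotting `cam` under the raw physiological signal is what produces the attention visualizations described above.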

AI-Powered Healthcare Credential Verification

MOJOHealth · AI Engineer Intern · Jun 2025 – Aug 2025 · GitHub Repo ↗

Designed and deployed agentic AI pipelines for automated medical personnel credential verification in a production healthcare environment. Built an OCR-to-GPT pipeline that extracts structured data from scanned medical licenses, certifications, and diplomas, then validates them against regulatory databases. Developed interactive QA systems enabling compliance officers to query documents in natural language and retrieve structured answers in real time, significantly reducing manual review time. Orchestrated multi-step automation workflows using Make (Integromat), connecting OCR engines, GPT API, internal databases, and notification systems into a seamless end-to-end verification pipeline. Assisted in production app deployment, iterative testing, and A/B evaluation of AI-driven features serving real healthcare clients.

Tech Stack: Python, GPT API, OCR (Tesseract), Make (Integromat), Docker · Methods: Agentic AI Pipelines, Document Intelligence, Retrieval-Augmented QA, Workflow Automation
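The OCR-to-GPT verification flow can be sketched as a small extract-then-validate pipeline. This is a hedged illustration, not the production code: the LLM is stubbed with a plain callable (a real deployment would wrap the GPT API, and the OCR text would come from Tesseract), and the field names are hypothetical:

```python
import json

# Hypothetical required schema for a credential record (illustrative only).
REQUIRED_FIELDS = {"name", "license_number", "expiration_date"}

def extract_fields(ocr_text, llm):
    """Ask an LLM to convert raw OCR text into structured JSON.

    `llm` is any callable prompt -> str; in production this would call
    the GPT API with a JSON-output instruction.
    """
    prompt = (
        "Extract name, license_number, expiration_date as JSON from:\n"
        + ocr_text
    )
    return json.loads(llm(prompt))

def validate(record):
    """Flag records missing required fields for human review."""
    missing = REQUIRED_FIELDS - set(record)
    return {"ok": not missing, "missing": sorted(missing)}

# Stubbed LLM for illustration -- a real pipeline calls the GPT API here.
fake_llm = lambda prompt: '{"name": "Dr. A", "license_number": "MD-123"}'
record = extract_fields("SCANNED LICENSE ...", fake_llm)
print(validate(record))   # {'ok': False, 'missing': ['expiration_date']}
```

Keeping validation separate from extraction is what lets incomplete or low-confidence records be routed to a compliance officer instead of failing silently.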

End-to-End ML Pipeline Engineering

Orcawise · ML Intern · Jun 2024 – Sep 2024 · GitHub Repo ↗

Built production machine learning models using PyTorch and TensorFlow that improved prediction accuracy on diverse client datasets spanning tabular, time series, and text domains. Designed end-to-end data pipelines covering the full ML lifecycle: data ingestion from multiple sources, preprocessing and feature engineering, model training with hyperparameter tuning, evaluation with cross-validation, and deployment-ready model serialization. Optimized training workflows for GPU environments (CUDA), implementing mixed-precision training, gradient accumulation, and efficient data loading to reduce iteration time and enable faster experimentation cycles. Delivered reproducible ML experiments with version-controlled configurations and automated evaluation reports.

Tech Stack: Python, PyTorch, TensorFlow, Scikit-Learn, Pandas, CUDA, Git · Methods: End-to-End ML Pipelines, Hyperparameter Optimization, Mixed-Precision Training, Cross-Validation, Feature Engineering
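Gradient accumulation, one of the GPU optimizations mentioned above, can be shown without any framework. This NumPy sketch (a toy linear model, not the client code) averages micro-batch gradients and applies one parameter update per `accum_steps` micro-batches, emulating a larger batch on memory-limited hardware:

```python
import numpy as np

def train_with_accumulation(X, y, accum_steps=4, lr=0.1, micro=8):
    """Gradient accumulation sketch on a linear least-squares model.

    Accumulates (scaled) micro-batch gradients and steps once every
    `accum_steps` micro-batches, for an effective batch size of
    accum_steps * micro.
    """
    w = np.zeros(X.shape[1])
    grad_buf = np.zeros_like(w)
    for step, start in enumerate(range(0, len(X), micro), 1):
        xb, yb = X[start:start + micro], y[start:start + micro]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # MSE gradient
        grad_buf += grad / accum_steps             # accumulate, pre-scaled
        if step % accum_steps == 0:
            w -= lr * grad_buf                     # one "optimizer step"
            grad_buf[:] = 0.0
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = train_with_accumulation(X, y)   # moves toward w_true in one pass
```

In PyTorch the same pattern is a scaled `loss.backward()` per micro-batch with `optimizer.step()` and `optimizer.zero_grad()` every `accum_steps` iterations.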

Explainable AI & Model Interpretability

Cross-Project Research Focus · 2024 – Present · GitHub Repo ↗

Developed and applied explainability techniques across multiple research projects to make deep learning models more transparent and trustworthy. Implemented Grad-CAM adaptations for temporal data, producing visual explanations of which time steps and features drive model predictions in time series classification tasks. Explored attention mechanism visualization in Transformer architectures to understand how self-attention layers encode sequential dependencies in affective computing applications. Built custom visualization tools using Matplotlib and D3.js to communicate model behavior to non-technical stakeholders, bridging the gap between complex ML systems and human understanding. This cross-cutting research theme connects the PINN, affective modeling, and ML pipeline projects through a unified focus on interpretability.

Tech Stack: Python, PyTorch, Matplotlib, D3.js · Methods: Grad-CAM, Attention Visualization, SHAP, Feature Attribution, Temporal Interpretability
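The quantity plotted in the attention visualizations above is the per-head softmax attention map. In the projects these weights would be read out of a trained Transformer (e.g., via forward hooks); this self-contained NumPy sketch just computes the map itself from synthetic queries and keys:

```python
import numpy as np

def attention_map(Q, K):
    """Scaled dot-product attention weights for one head.

    Q, K: (T, d) query and key matrices. Returns a (T, T) matrix whose
    row i shows how much position i attends to each position j -- the
    matrix rendered as a heatmap in attention visualizations.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[1])        # scaled dot products
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
Q = rng.normal(size=(10, 16))   # 10 time steps, 16-dim head (synthetic)
K = rng.normal(size=(10, 16))
A = attention_map(Q, K)
print(A.shape)                  # (10, 10); each row sums to 1
```

Rendering `A` with `matplotlib.pyplot.imshow` gives the standard head-by-head heatmap used to inspect which sequential dependencies a layer has learned.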