Floating Point & Deep Network Project
Advisors: Prof. Eliot Moss, Prof. Philip Thomas
We investigate how floating-point arithmetic errors, and more accurate methods such as compensated summation, affect the training of deep neural networks.
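As a minimal sketch of what compensated summation does (an illustration using Neumaier's variant of Kahan's algorithm, not the project's actual implementation):

```python
def neumaier_sum(values):
    """Compensated summation (Neumaier's variant of Kahan's
    algorithm): a second accumulator collects the low-order
    bits that plain floating-point addition discards."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x   # low bits of x were lost
        else:
            comp += (x - t) + total   # low bits of total were lost
        total = t
    return total + comp
```

With plain summation, `1e16 + 1.0 - 1e16` evaluates to `0.0` because the `1.0` is absorbed into the large partial sum; the compensated version recovers it and returns `1.0`.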
Advisor: Prof. James Allan
This Search Engine Honors project explores how different manipulation techniques influence the downstream product recommendation behavior of retrieval-augmented generation (RAG) systems. We also experiment with various defense mechanisms to mitigate such manipulations.
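As a hypothetical illustration of the attack surface (a toy bag-of-words retriever, not the project's actual system), keyword stuffing can push an attacker-controlled passage into the generation prompt:

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count occurrences of query terms in the doc."""
    terms = query.lower().split()
    words = doc.lower().split()
    return sum(words.count(t) for t in terms)

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Retrieve the top-k documents by score and splice them into
    the prompt, as a minimal RAG pipeline would."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A product listing stuffed with query terms outranks honest listings, enters the model's context, and biases the downstream recommendation; the defenses we study aim to break exactly this chain.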
Advisor: Rohan Pandey
We implemented an NLP pipeline for medical text de-identification with several surrogate generation techniques on the i2b2 dataset, and evaluated them using privacy metrics, downstream task performance, and adversarial GPT-5 benchmarking.
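One common surrogate technique is date shifting. A simplified sketch (the project's actual pipeline, date formats, and regexes differ):

```python
import re
from datetime import datetime, timedelta

DATE_RE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")  # MM/DD/YYYY

def shift_dates(text: str, offset_days: int) -> str:
    """Replace every date with the same date moved by a fixed
    per-record offset: the true dates are hidden, but intervals
    between clinical events are preserved."""
    def repl(m):
        month, day, year = (int(g) for g in m.groups())
        shifted = datetime(year, month, day) + timedelta(days=offset_days)
        return shifted.strftime("%m/%d/%Y")
    return DATE_RE.sub(repl, text)
```

Because every date in a record shifts by the same offset, downstream tasks that depend on temporal ordering (e.g. length of stay) still work on the de-identified text.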
Advisors: Prof. Hung Le, An La
We researched heuristics and implemented a fast tree decomposition algorithm in Python, achieving significant speedups over the NetworkX library on benchmark road network datasets.
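One standard heuristic family for this problem is elimination orderings; below is a simplified sketch of the min-degree heuristic, which upper-bounds treewidth (an illustration only; the project's heuristics and implementation differ):

```python
def min_degree_width(adj):
    """Upper-bound a graph's treewidth with the min-degree
    elimination heuristic: repeatedly eliminate a vertex of
    minimum degree and turn its neighbourhood into a clique.
    The largest neighbourhood eliminated is the width of the
    induced tree decomposition.
    adj: dict mapping each vertex to a set of its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # min-degree vertex
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:  # clique the neighbourhood of v
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
    return width
```

The same elimination ordering also yields the decomposition's bags (each eliminated vertex plus its neighbourhood at elimination time); this sketch returns only the width.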