Overview

My research focuses on the intersection of machine learning algorithms and efficient hardware design — specifically, enabling high-throughput, low-power computation for real-time applications. I work across the stack, from digital architectures and memory systems to hardware-aware neural network optimization.

Current Work

Improving Quantization for Large Language Models

  • Role: Research Associate at AMD Research and Advanced Development (2025).
  • Project: Reducing the computational overhead of rotational equalization while preserving model performance in large language models (the core idea is sketched below).
  • Impact: Reduces inference cost and improves throughput for transformer-scale models.
  • Tech: Python, ML frameworks (FINN, QONNX, Brevitas), FPGA, GPU.
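
As a rough illustration of the idea (not the project's actual method or code): multiplying activations by an orthogonal matrix R and weights by R^T leaves the matmul result unchanged, but spreads outlier channels across the whole tensor, which makes low-bit quantization far less lossy. A minimal NumPy sketch with a random orthogonal rotation standing in for the structured (e.g., Hadamard) rotations used in practice:

    import numpy as np

    def random_rotation(n, seed=0):
        # QR of a Gaussian matrix yields a random orthogonal R (R @ R.T = I).
        q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))
        return q

    def quantize(x, bits=4):
        # Symmetric per-tensor quantization; a single outlier inflates the scale.
        scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
        return np.round(x / scale) * scale

    n = 64
    x = np.random.default_rng(1).standard_normal((8, n))
    x[:, 0] *= 50.0                                  # inject an outlier channel
    w = np.random.default_rng(2).standard_normal((n, n))
    R = random_rotation(n)

    y_ref = x @ w                                    # full-precision reference
    y_naive = quantize(x) @ quantize(w)              # quantize both operands directly
    y_rot = quantize(x @ R) @ quantize(R.T @ w)      # (x R)(R^T w) == x w

    print(np.abs(y_ref - y_naive).mean())            # large: outlier wastes the range
    print(np.abs(y_ref - y_rot).mean())              # smaller: outlier is spread out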

RF Machine Learning Systems

  • Role: PhD Candidate at University at Buffalo (2023-Present).
  • Project: Designing end-to-end RFML systems for real-time modulation classification on FPGAs (a toy classifier is sketched below).
  • Impact: Enables high-throughput, low-area RF signal processing for communication systems.
  • Tech: Python, Verilog, Vivado, FPGA.
  • Links: GitHub Repository, Paper: ISCAS 2025.
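
For context, a modulation classifier of this kind typically consumes raw IQ samples as a two-channel time series. A toy PyTorch sketch of such a classifier (the layer sizes and the 11-class setup, as in the common RadioML benchmark, are assumptions, not the deployed design):

    import torch
    import torch.nn as nn

    class ModClassifier(nn.Module):
        """Toy 1-D CNN over raw IQ frames (channel 0 = I, channel 1 = Q)."""
        def __init__(self, n_classes=11):   # 11 modulations, RadioML-style
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):               # x: (batch, 2, n_samples)
            return self.head(self.features(x).squeeze(-1))

    model = ModClassifier()
    iq = torch.randn(4, 2, 1024)            # four frames of 1024 IQ samples
    print(model(iq).shape)                  # torch.Size([4, 11])

This shows only the model side; the FPGA implementation itself (Verilog/Vivado) is the actual contribution and is out of scope here.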

Past Work

VCO-Based Spiking Neural Networks

  • Role: PhD Candidate at University at Buffalo (2023-Present).
  • Project: Developing system-level guidelines for training and designing VCO-based SNNs that map to scaling-friendly hardware (an idealized neuron model is sketched below).
  • Impact: Provides a framework for building SNN hardware, with accuracy analysis that accounts for circuit non-idealities.
  • Tech: Python, Simulink, Cadence Virtuoso.
  • Links: GitHub Repository, Papers: OJCAS 2025, MWSCAS 2024.
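
As a rough intuition (a deliberately idealized model, not the circuit-level design): a VCO neuron oscillates at a frequency set by its input, and each completed oscillation can be read out as a spike, so firing rate encodes input amplitude. A minimal Python sketch; the non-idealities the papers analyze (jitter, mismatch, nonlinearity) are omitted entirely:

    import numpy as np

    def vco_neuron(drive, f0=1e6, kvco=5e5, dt=1e-8):
        # f0, kvco, dt are arbitrary illustrative values.
        # Instantaneous frequency is f0 + kvco * drive; emit a spike
        # each time the accumulated phase wraps a full 2*pi cycle.
        phase, spikes = 0.0, []
        for t, v in enumerate(drive):
            phase += 2 * np.pi * (f0 + kvco * v) * dt
            if phase >= 2 * np.pi:
                phase -= 2 * np.pi
                spikes.append(t)
        return spikes

    quiet = vco_neuron(np.zeros(400))
    driven = vco_neuron(0.8 * np.ones(400))
    print(len(quiet), len(driven))   # firing rate rises with the drive level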

Image Compression using Implicit Neural Representations

  • Role: Visiting Researcher at University of Tokyo (Summer 2023), PhD Candidate at University at Buffalo (2023-Present).
  • Project: Developing constrained training methods for INRs that represent multiple images with fewer networks (one sharing scheme is sketched below).
  • Impact: Achieved high compression ratios while preserving image quality.
  • Tech: Python, PyTorch.
  • Links: GitHub Repository, Paper: IoT Journal 2024.
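
One common way to share a single network across several images (an assumption for illustration; the paper's constrained-training method may differ) is to condition a SIREN-style MLP on a small learned per-image code, so the weights are amortized across the image set:

    import torch
    import torch.nn as nn

    class Sine(nn.Module):
        """Sinusoidal activation used by SIREN-style INRs."""
        def __init__(self, w0=30.0):
            super().__init__()
            self.w0 = w0
        def forward(self, x):
            return torch.sin(self.w0 * x)

    class SharedINR(nn.Module):
        """Toy multi-image INR: (x, y) plus a learned per-image code -> RGB."""
        def __init__(self, n_images=4, code_dim=8, hidden=64):
            super().__init__()
            self.codes = nn.Embedding(n_images, code_dim)
            self.net = nn.Sequential(
                nn.Linear(2 + code_dim, hidden), Sine(),
                nn.Linear(hidden, hidden), Sine(),
                nn.Linear(hidden, 3),
            )
        def forward(self, coords, img_id):
            z = self.codes(img_id)                  # (batch, code_dim)
            return self.net(torch.cat([coords, z], dim=-1))

    model = SharedINR()
    coords = torch.rand(16, 2) * 2 - 1              # pixel coords in [-1, 1]
    img_id = torch.randint(0, 4, (16,))
    print(model(coords, img_id).shape)              # torch.Size([16, 3])

Compression then amounts to storing the small shared weights and codes instead of the pixels.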

Content Addressable Memory for Memory-Augmented Neural Networks

  • Role: PhD Candidate at University at Buffalo (2023-Present).
  • Project: Designing CAM architectures using emerging memory technologies for efficient MANN implementation (a software model of the lookup appears below).
  • Impact: Improved search latency and energy efficiency for MANNs.
  • Tech: Python, Cadence Virtuoso.
  • Links: Paper: JXCDC 2024.
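
A CAM compares a query against every stored row in parallel and returns the closest match, which is exactly the associative lookup a MANN needs. A small software model of that behavior (binary vectors and Hamming distance stand in for the analog match-line circuits; the memory technology and circuit design are the actual contribution):

    import numpy as np

    def cam_search(memory, query):
        # Compare the query against all rows at once; the row with the
        # fewest mismatched bits wins, mimicking a winner-take-all
        # match-line readout.
        dist = np.count_nonzero(memory != query, axis=1)
        return int(np.argmin(dist)), int(dist.min())

    rng = np.random.default_rng(0)
    memory = rng.integers(0, 2, size=(128, 64))   # 128 stored 64-bit keys
    query = memory[42].copy()
    query[:3] ^= 1                                # corrupt 3 bits of the key
    print(cam_search(memory, query))              # (42, 3): still finds row 42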

IIR Filter-Based Spiking Neural Networks

  • Role: Research Scholar at IIT Kharagpur (2021-2023), Visiting Researcher at University of Minnesota (Spring 2023), PhD Candidate at University at Buffalo (2023-Present).
  • Project: Designing inference accelerators and multiprocessor training schedulers for SNN architectures built on IIR filter-based LIF neurons (the neuron model is sketched below).
  • Impact: Demonstrated area- and power-efficient SNN inference hardware with high accuracy and improved multiprocessor training throughput.
  • Tech: Python, Verilog, Vivado, FPGA.
  • Links: Papers: TCAS-AI 2024, ISCAS 2023.
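
The neuron model itself is simple to state: a leaky integrate-and-fire neuron is a first-order IIR filter with a threshold. A minimal Python sketch (parameter values are illustrative, not from the papers):

    import numpy as np

    def iir_lif(x, alpha=0.9, threshold=1.0):
        # v[t] = alpha * v[t-1] + x[t]: a one-pole IIR low-pass.
        # Crossing the threshold emits a spike and soft-resets v.
        v, spikes = 0.0, np.zeros_like(x)
        for t, xt in enumerate(x):
            v = alpha * v + xt
            if v >= threshold:
                spikes[t] = 1.0
                v -= threshold
        return spikes

    x = np.full(50, 0.15)
    print(iir_lif(x).sum())   # a steady input produces periodic spikes

In hardware, the recurrence costs one multiply (or shift) and one add per timestep, which is what keeps the neuron small.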

EEG Signal-Based Seizure Prediction using CNNs

  • Role: Research Scholar at IIT Kharagpur (2021-2023).
  • Project: Developing CNN architectures for real-time seizure prediction from EEG signals (a minimal model is sketched below).
  • Impact: Achieved high prediction accuracy with low-energy models suitable for wearable devices.
  • Tech: Python, TensorFlow, Keras, Vivado, FPGA.
  • Links: Papers: AICSP 2022, MWSCAS 2021.
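
The shape of such a model: a compact 1-D CNN over windowed multi-channel EEG, small enough to run on a wearable. A toy Keras sketch (channel count, window length, and layer sizes are placeholders, not the published architecture):

    import tensorflow as tf

    def build_model(n_channels=22, n_samples=1024):
        # Binary output: preictal (seizure approaching) vs. interictal.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_samples, n_channels)),
            tf.keras.layers.Conv1D(16, 9, activation="relu"),
            tf.keras.layers.MaxPooling1D(4),
            tf.keras.layers.Conv1D(32, 7, activation="relu"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])

    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()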

Image Classification with Power-of-Two CNNs

  • Role: Research Scholar at IIT Kharagpur (2021-2023), Undergraduate Student at IIT Kharagpur (2016-2021).
  • Project: Developing CNN architectures with power-of-two weights for efficient FPGA implementation (the quantizer is sketched below).
  • Impact: Achieved significant reductions in resource utilization and power consumption while maintaining accuracy.
  • Tech: Python, Verilog, Vivado, FPGA.
  • Links: Papers: TCAS-II 2023, MWSCAS 2019.
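
The trick is that a power-of-two weight turns every multiplication into a bit shift, which on an FPGA costs a few LUTs instead of a DSP block. A minimal NumPy sketch of the weight quantizer (the exponent range is an illustrative choice):

    import numpy as np

    def quantize_pow2(w, min_exp=-6, max_exp=0):
        # Round each weight to the nearest signed power of two (in the
        # log domain), so x * w becomes a left/right shift of x in hardware.
        exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), min_exp, max_exp)
        return np.where(w == 0, 0.0, np.sign(w) * 2.0 ** exp)

    w = np.array([0.30, -0.07, 0.55, 0.0])
    print(quantize_pow2(w))   # [ 0.25  -0.0625  0.5  0. ]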

Keywords

Hardware-Aware Machine Learning · FPGA · Quantization · Spiking Neural Networks · In-Memory Compute · RFML · Digital Design