ABHIJAY GHILDYAL
I am a 6th-year CS Ph.D. student in the Computer Graphics and Vision Lab at Portland State
University. I'm fortunate to be advised by Dr. Feng
Liu.
My research interests lie broadly in computational visual perception, novel view
synthesis, machine learning safety, adversarial robustness, robotics, and precision agriculture.
In the summer of 2023, I was an intern in the Imaging Science team at Amazon.
I am currently seeking job opportunities as a
Computer Vision Research Scientist/Engineer.
Want to know more or connect? Check out these links:
Research
Foundation Models Boost Low-Level Perceptual Similarity Metrics
Abhijay Ghildyal,
Nabajeet Barman,
Saman Zadtootaghaj
Under Review
arXiv / code / bibtex
Prior perceptual similarity metrics built on foundation models focus on the final layer or
embedding. In contrast, this work investigates intermediate features, which remain largely
unexplored for low-level perceptual similarity. We show that intermediate features are more
effective and that, combined with simple feature distance measures and no training (zero-shot),
they can outperform existing metrics.
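The core idea lends itself to a short sketch: tap an intermediate layer of a pretrained backbone and compare the two images' feature maps with a simple distance, with no training involved. Below is a minimal sketch in PyTorch, assuming a torchvision ResNet-50 as a stand-in for a foundation model; the backbone, layer choice, and cosine distance are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.feature_extraction import create_feature_extractor

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights).eval()
# Tap an intermediate stage rather than the final embedding.
extractor = create_feature_extractor(backbone, return_nodes={"layer2": "feat"})
preprocess = weights.transforms()

@torch.no_grad()
def zero_shot_distance(img_a, img_b):
    # Feature-space distance between two PIL images; no training involved.
    fa = extractor(preprocess(img_a).unsqueeze(0))["feat"]
    fb = extractor(preprocess(img_b).unsqueeze(0))["feat"]
    fa, fb = F.normalize(fa, dim=1), F.normalize(fb, dim=1)
    # Average (1 - cosine similarity) over all spatial positions.
    return (1.0 - (fa * fb).sum(dim=1)).mean().item()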
LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model
Nasim J. Avanaki,
Abhijay Ghildyal,
Nabajeet Barman,
Saman Zadtootaghaj
Advances in Image Manipulation Workshop at European Conference on Computer Vision (ECCV), 2024
arXiv / code / bibtex
We developed a lightweight No-Reference Image Quality Assessment (NR-IQA) model with a
dual-branch architecture: one branch is trained on synthetically distorted images and the other on
authentically distorted images, which improves generalizability across distortion types. The
resulting compact model achieves SOTA performance on the ECCV AIM UHD-IQA challenge validation
and test datasets while being nearly 5.7 times faster than the fastest SOTA model.
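As a rough illustration of the dual-branch idea (a sketch, not the LAR-IQA architecture itself), the snippet below fuses two lightweight encoders, one intended for synthetic and one for authentic distortions, into a single quality score; the MobileNetV3 backbones and the fusion head sizes are assumptions.

import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class DualBranchIQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Two lightweight encoders; in practice each branch would be trained
        # on a different distortion regime (synthetic vs. authentic).
        self.synthetic_branch = mobilenet_v3_small(num_classes=128)
        self.authentic_branch = mobilenet_v3_small(num_classes=128)
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        feats = torch.cat([self.synthetic_branch(x), self.authentic_branch(x)], dim=1)
        return self.head(feats).squeeze(-1)  # one quality score per image

model = DualBranchIQA().eval()
with torch.no_grad():
    score = model(torch.rand(1, 3, 384, 384))  # dummy input; real use needs trained weights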
Attacking Perceptual Similarity Metrics
Abhijay Ghildyal,
Feng Liu
Transactions on Machine Learning Research (TMLR), 2023
Featured Certification (Spotlight 🌟, or top ~0.01% of accepted papers)
arXiv / code / OpenReview / bibtex
In this study, we systematically examine the robustness of both traditional and learned perceptual
similarity metrics to imperceptible adversarial perturbations.
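As a toy illustration of the kind of probe involved (a simplified sketch, not the attacks evaluated in the paper), the snippet below applies a one-step FGSM-style perturbation that increases an LPIPS distance while keeping the change visually negligible; the use of the lpips package and the epsilon value are assumptions.

import torch
import lpips  # pip install lpips

metric = lpips.LPIPS(net='alex').eval()

def fgsm_on_metric(ref, img, eps=2.0 / 255.0):
    # One-step sign attack that nudges `img` to increase its LPIPS distance
    # to `ref`. Inputs are (1, 3, H, W) tensors in [-1, 1], as lpips expects.
    img = img.clone().requires_grad_(True)
    metric(ref, img).sum().backward()
    with torch.no_grad():
        return (img + eps * img.grad.sign()).clamp(-1.0, 1.0)

ref = torch.rand(1, 3, 64, 64) * 2 - 1
img = (ref + 0.01 * torch.randn_like(ref)).clamp(-1.0, 1.0)
adv = fgsm_on_metric(ref, img)  # visually indistinguishable from img, larger LPIPS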
A Perceptual Quality Metric for Video Frame Interpolation
Qiqi Hou,
Abhijay Ghildyal,
Feng Liu
European Conference on Computer Vision (ECCV), 2022
arXiv / code / video / bibtex
We developed a perceptual quality metric for measuring video frame interpolation results. Our method
learns perceptual features directly from videos instead of individual frames.
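As a hedged sketch of what "features from videos rather than frames" can look like in code (the layer sizes and the distance-based score are illustrative assumptions, not the paper's model):

import torch
import torch.nn as nn

class ClipFeatureEncoder(nn.Module):
    # A tiny 3D-convolutional encoder that looks at a short stack of frames,
    # so temporal artifacts (not just per-frame ones) influence the features.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool over time and space
        )

    def forward(self, clip):               # clip: (B, 3, T, H, W)
        return self.net(clip).flatten(1)   # (B, 32) spatio-temporal feature

encoder = ClipFeatureEncoder()
reference = torch.rand(1, 3, 5, 128, 128)     # 5-frame reference clip
interpolated = torch.rand(1, 3, 5, 128, 128)  # clip containing interpolated frames
score = torch.norm(encoder(reference) - encoder(interpolated), dim=1)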
Shift-tolerant Perceptual Similarity Metric
Abhijay Ghildyal,
Feng Liu
European Conference on Computer Vision (ECCV), 2022
arXiv / code / video / IQA-PyTorch / bibtex
We investigated a broad range of neural network elements and developed a robust perceptual
similarity metric. Our shift-tolerant perceptual similarity metric (ST-LPIPS) is consistent with
human perception and is less susceptible to imperceptible misalignments between two images than
existing metrics.
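Since the metric is available through IQA-PyTorch, a minimal usage sketch looks roughly like the following; the metric identifier 'stlpips' is an assumption, so check pyiqa.list_models() in your installed version.

import torch
import pyiqa  # pip install pyiqa

metric = pyiqa.create_metric('stlpips')  # full-reference; lower = more similar

ref = torch.rand(1, 3, 256, 256)              # images in [0, 1], (N, 3, H, W)
shifted = torch.roll(ref, shifts=2, dims=-1)  # a small horizontal shift
print(metric(shifted, ref))  # remains low despite the imperceptible misalignment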
Experience
Services
- Conference and Journal Reviewer
  - ACMMM (2022)
  - TMLR (2023)
  - WACV (2024, 2025)
  - TPAMI (2024)
  - ECCV (2024)
  - AAAI (2025)
- Workshop Reviewer
  - WiCV (CVPR 2023, CVPR 2024, ECCV 2024)
  - AI4VA: AI for Visual Arts Workshop and Challenges (ECCV 2024)
  - Out Of Distribution Generalization in Computer Vision (ECCV 2024)
Awards
- Richard Kieburtz Memorial Graduate Fellowship
Teaching
- Teaching Assistant
  - CS 441/541 Artificial Intelligence (Fall '19, Winter '20)
  - CS 445/545 Machine Learning (Spring '21)
  - CS 447/547 Computer Graphics (Fall '21)