Super-resolution imaging is at the heart of my research, where I focus on computational methods to overcome the diffraction limit of traditional optical microscopy. This field has transformative potential, enabling the visualization of cellular and molecular structures in unprecedented detail. I am particularly interested in developing deep-learning-based solutions that efficiently upscale low-resolution microscopy images while preserving critical features and reducing artifacts.
My work emphasizes algorithmic solutions, making super-resolution imaging accessible without the need for expensive or complex hardware modifications. These innovations have the potential to enhance our understanding of cellular structures and molecular dynamics in fields like biology, materials science, and biophysics.
Research Highlights
Developing advanced deep learning models, including transformer-based architectures, to upscale low-resolution images and accurately reconstruct fine spatial details. (Dutta et al., 2025)
Exploring traditional techniques, such as interpolation and iterative back-projection, to benchmark performance against modern computational methods.
Applying these techniques to various microscopy modalities, such as fluorescence and confocal microscopy, with applications in cell biology, materials science, and biophysics.
Evaluating the impact of super-resolution methods using quantitative metrics like SSIM and PSNR to ensure reliable and reproducible results.
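To make the deep-learning upscaling step above concrete: many SR networks produce their final high-resolution output with a sub-pixel (pixel-shuffle) layer that rearranges r² learned feature maps into one r×-larger image. The sketch below is illustrative only; the function name `pixel_shuffle` and the plain nested-list representation are my own, not from the cited paper.

```python
def pixel_shuffle(channels, r=2):
    # Rearrange r*r feature maps of size H x W into a single map of size
    # rH x rW: each output pixel (y, x) is read from channel
    # (y % r) * r + (x % r) at spatial position (y // r, x // r).
    # This is the ESPCN-style sub-pixel upscaling step used by many SR networks.
    h, w = len(channels[0]), len(channels[0][0])
    return [[channels[(y % r) * r + (x % r)][y // r][x // r]
             for x in range(w * r)]
            for y in range(h * r)]


# Four 1x1 feature maps become one 2x2 image.
chans = [[[1]], [[2]], [[3]], [[4]]]
print(pixel_shuffle(chans, 2))  # [[1, 2], [3, 4]]
```

In a real network the r² channels come from a learned convolution, so the rearrangement lets the model predict each sub-pixel position from low-resolution features without any hand-crafted interpolation.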
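The classical baseline mentioned above, iterative back-projection, can be sketched in a few lines: starting from a naive upsampled estimate, the method repeatedly simulates the low-resolution observation from the current estimate and back-projects the residual. This is a minimal pure-Python sketch assuming a simple 2x2 box-average degradation model and nearest-neighbour upsampling; the function names are my own.

```python
def downsample(img, s=2):
    # Assumed degradation model: average each s x s block.
    h, w = len(img), len(img[0])
    return [[sum(img[y * s + dy][x * s + dx]
                 for dy in range(s) for dx in range(s)) / (s * s)
             for x in range(w // s)]
            for y in range(h // s)]


def upsample(img, s=2):
    # Nearest-neighbour replication by factor s.
    return [[img[y // s][x // s] for x in range(len(img[0]) * s)]
            for y in range(len(img) * s)]


def iterative_back_projection(lr, s=2, iters=20, lam=1.0):
    # Refine an initial upsampled estimate so that re-applying the
    # degradation model reproduces the observed low-resolution image.
    hr = upsample(lr, s)
    for _ in range(iters):
        simulated = downsample(hr, s)
        residual = [[lr[y][x] - simulated[y][x] for x in range(len(lr[0]))]
                    for y in range(len(lr))]
        up_res = upsample(residual, s)  # back-project the error
        hr = [[hr[y][x] + lam * up_res[y][x] for x in range(len(hr[0]))]
              for y in range(len(hr))]
    return hr
```

After convergence, downsampling the estimate reproduces the observed low-resolution image exactly, which is the consistency constraint that back-projection enforces; it cannot, of course, recover detail the degradation destroyed, which is why it serves as a benchmark rather than a competitor to learned methods.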
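The quantitative metrics named in the last highlight can also be illustrated. Below is a minimal pure-Python sketch: PSNR follows the standard definition, while `ssim_global` (my own simplification) computes the SSIM statistic over the whole image in a single window rather than averaging over local Gaussian windows as the full metric does; production evaluation would use a library implementation such as scikit-image's.

```python
import math


def psnr(ref, test, data_range=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    flat_r = [v for row in ref for v in row]
    flat_t = [v for row in test for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_t)) / len(flat_r)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(data_range ** 2 / mse)


def ssim_global(ref, test, data_range=255.0):
    # Single-window SSIM over the whole image (a simplification; the
    # standard metric averages this statistic over local windows).
    x = [v for row in ref for v in row]
    y = [v for row in test for v in row]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / (n - 1)
    vy = sum((v - my) ** 2 for v in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * data_range) ** 2  # stabilising constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

PSNR is purely a pixel-error measure, whereas SSIM compares luminance, contrast, and structure, which is why SR papers typically report both.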
References
2025
arXiv
State-of-the-Art Transformer Models for Image Super-Resolution: Techniques, Challenges, and Applications
Debasish Dutta, Deepjyoti Chetia, Neeharika Sonowal, and Sanjib Kr Kalita
Image Super-Resolution (SR) aims to recover a high-resolution image from its low-resolution counterpart, which has been affected by a specific degradation process, by enhancing detail and visual quality. Recent advancements in transformer-based methods have remolded image super-resolution by enabling high-quality reconstructions surpassing previous deep-learning approaches such as CNN- and GAN-based models. This effectively addresses the limitations of previous methods, such as limited receptive fields, poor global context capture, and challenges in high-frequency detail recovery. Additionally, the paper reviews recent trends and advancements in transformer-based SR models, exploring various innovative techniques and architectures that combine transformers with traditional networks to balance global and local contexts. These neoteric methods are critically analyzed, revealing promising yet unexplored gaps and potential directions for future research. Several visualizations of models and techniques are included to foster a holistic understanding of recent trends. This work seeks to offer a structured roadmap for researchers at the forefront of deep learning, specifically exploring the impact of transformers on super-resolution techniques.
@article{dutta2025srtrans,
  title         = {State-of-the-Art Transformer Models for Image Super-Resolution: Techniques, Challenges, and Applications},
  author        = {Dutta, Debasish and Chetia, Deepjyoti and Sonowal, Neeharika and Kalita, Sanjib Kr},
  journal       = {arXiv preprint arXiv:2501.07855},
  year          = {2025},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CV},
  dimensions    = {true},
  keywords      = {arxiv, Single Image Super-Resolution (SR); Transformers; Vision Transformers (ViTs); Image Degradation and Enhancement; Self-Attention Mechanisms},
  doi           = {10.48550/arXiv.2501.07855},
  url           = {https://arxiv.org/abs/2501.07855},
}