PS-Transformer: Learning Sparse Photometric Stereo Network using Self-Attention Mechanism

11/21/2022
by Satoshi Ikehata, et al.

Existing deep calibrated photometric stereo networks generally aggregate observations under different lights using pre-defined operations such as linear projection and max pooling. While effective for dense capture, such simple first-order operations often fail to capture the high-order interactions among observations under a small number of different lights. To tackle this issue, this paper presents a deep sparse calibrated photometric stereo network named PS-Transformer, which leverages a learnable self-attention mechanism to properly capture complex inter-image interactions. PS-Transformer builds upon a dual-branch design to explore both pixel-wise and image-wise features, and each feature is trained with intermediate surface-normal supervision to maximize geometric feasibility. A new synthetic dataset named CyclesPS+ is also presented, together with a comprehensive analysis of how to successfully train photometric stereo networks. Extensive results on publicly available benchmark datasets demonstrate that the surface normal prediction accuracy of the proposed method significantly outperforms other state-of-the-art algorithms with the same number of input images, and is even comparable to that of dense algorithms that take 10× more images as input.
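To make the contrast concrete, a first-order aggregator (e.g. max pooling) treats each per-light observation independently, whereas self-attention lets every observation's weight depend on all the others. Below is a minimal, hypothetical sketch of single-head self-attention aggregating N per-light feature vectors into one descriptor; the paper's actual PS-Transformer uses learned multi-head projections and a dual-branch architecture, while this sketch uses identity Q/K/V projections purely for illustration.

```python
import numpy as np

def self_attention_aggregate(feats):
    """Aggregate per-light features with single-head self-attention.

    feats: (N, D) array, one D-dim feature per light direction.
    Returns a single (D,) descriptor. Hypothetical sketch: identity
    Q/K/V projections stand in for the learned ones in the paper.
    """
    n, d = feats.shape
    q, k, v = feats, feats, feats                 # identity projections
    scores = q @ k.T / np.sqrt(d)                 # (N, N) pairwise interactions
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    attended = attn @ v                           # second-order mixing of observations
    return attended.mean(axis=0)                  # pool to one descriptor

rng = np.random.default_rng(0)
obs = rng.standard_normal((10, 16))               # e.g. 10 lights, 16-dim features
agg = self_attention_aggregate(obs)
print(agg.shape)                                  # (16,)
```

Because the attention weights are computed from the observations themselves, the aggregation adapts to which lights are informative, which is exactly what fixed max pooling cannot do in the sparse (few-light) regime.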


Related research:

- Smart Bird: Learnable Sparse Attention for Efficient and Effective Transformer (08/20/2021)
- Learning Image Deraining Transformer Network with Dynamic Dual Self-Attention (08/15/2023)
- Deep Photometric Stereo for Non-Lambertian Surfaces (07/26/2020)
- Deep Laparoscopic Stereo Matching with Transformers (07/25/2022)
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond: A Survey (12/16/2022)
- Sequence-to-Sequence Model with Transformer-based Attention Mechanism and Temporal Pooling for Non-Intrusive Load Monitoring (06/08/2023)
- Multi-spectral Entropy Constrained Neural Compression of Solar Imagery (09/19/2023)
