POViT: Vision Transformer for Multi-objective Design and Characterization of Nanophotonic Devices

05/17/2022
by   Xinyu Chen, et al.

We solve a fundamental challenge in semiconductor IC design: the fast and accurate characterization of nanoscale photonic devices. Much like the fusion between AI and EDA, many efforts have been made to apply DNNs such as convolutional neural networks (CNNs) to prototype and characterize next-generation optoelectronic devices commonly found in photonic integrated circuits (PICs) and LiDAR. These prior works generally strive to predict the quality factor (Q) and modal volume (V) of, for instance, photonic crystals with ultra-high accuracy and speed. However, state-of-the-art models are still far from directly applicable in the real world: e.g., the correlation coefficient of V (V_coeff) is only about 80%, which falls short of what is needed to generate reliable and reproducible nanophotonic designs. Recently, attention-based transformer models have attracted extensive interest and have been widely used in CV and NLP. In this work, we propose the first-ever Transformer model (POViT) to efficiently design and simulate semiconductor photonic devices with multiple objectives. Unlike the standard Vision Transformer (ViT), we supply photonic crystals as the data input and change the activation layer from GELU to an absolute-value function (ABS). Our experiments show that POViT significantly exceeds the results reported by previous models: the correlation coefficient V_coeff increases by over 12% and the prediction error of Q is reduced by an order of magnitude, among several other key metric improvements. Our work has the potential to drive the expansion of EDA to fully automated photonic design. The complete dataset and code will be released to aid researchers working in the interdisciplinary field of physics and computer science.
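
The activation swap mentioned in the abstract is the main architectural change relative to a standard ViT. Below is a minimal PyTorch-style sketch of what such a change could look like, assuming a conventional ViT encoder MLP block and a two-output regression head for Q and V; the class and parameter names (AbsActivation, MLPBlock, QVRegressionHead, dim, hidden_dim) are illustrative assumptions, not taken from the released POViT code.

```python
import torch
import torch.nn as nn


class AbsActivation(nn.Module):
    """Absolute-value (ABS) activation, used here in place of GELU."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.abs(x)


class MLPBlock(nn.Module):
    """Feed-forward sub-block of a ViT encoder layer, with GELU swapped for ABS."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            AbsActivation(),            # a standard ViT would use nn.GELU() here
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class QVRegressionHead(nn.Module):
    """Hypothetical two-output head predicting Q and V from the class-token embedding."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, 2)     # outputs: [Q, V]

    def forward(self, cls_token: torch.Tensor) -> torch.Tensor:
        return self.fc(cls_token)
```

Under this kind of setup, the multi-objective training loss could simply be a (possibly weighted) sum of the regression losses on Q and V; the actual loss formulation used by POViT is described in the full paper.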
