Recent progress in large language models (LLMs) like GPT-4 and PaLM-2 ha...
While recent advancements in vision-language models have revolutionized
How to efficiently transform large language models (LLMs) into instructi...
The popularity of Contrastive Language-Image Pre-training (CLIP) has pro...
We present LLaMA-Adapter, a lightweight adaptation method to efficiently f...
One-to-one matching is a crucial design in DETR-like object detection fr...
Learning a single static convolutional kernel in each convolutional laye...
The Transformer has been an indispensable staple in deep learning. Howev...
The recently proposed MaskFormer <cit.> gives a refreshed perspective on...
Motivated by the success of Transformers in natural language processing ...
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compr...
One practice of employing deep neural networks is to apply the same arch...
Batch Normalization (BN) was shown to accelerate training and improve ge...
MobileNets, a class of top-performing convolutional neural network archi...
Convolutional Neural Networks (CNNs) have become deeper and more complic...
It is well known that deep neural networks (DNNs) are vulnerable to adve...
Network quantization is an effective solution to compress deep neural ne...
This paper presents incremental network quantization (INQ), a novel meth...