Emojich – zero-shot emoji generation using Russian language: a technical report

12/04/2021
by Alex Shonenkov, et al.

This technical report presents "Emojich", a text-to-image neural network that generates emojis conditioned on captions in Russian. We aim to preserve the generalization ability of the big pretrained model ruDALL-E Malevich (XL), with 1.3 billion parameters, at the fine-tuning stage, while giving a special style to the generated images. We present the engineering methods, the code realization, and all hyper-parameters needed to reproduce the results, as well as a Telegram bot where everyone can create their own customized sets of stickers. We also demonstrate some newly generated emojis obtained by the "Emojich" model.
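As a quick illustration of the inference side described above, the sketch below uses the open-source rudalle Python package that ships the ruDALL-E checkpoints. It is a minimal, hedged example: the checkpoint name 'Emojich', the Russian caption, and the sampling parameters are assumptions made for illustration and may differ from the exact code and values released with the report.

```python
# Minimal sketch: generating emoji-style images from a Russian caption
# with a ruDALL-E checkpoint via the open-source `rudalle` package.
# The checkpoint name 'Emojich', the caption and the sampling settings
# below are illustrative assumptions, not values taken from the report.
import torch
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images

device = 'cuda'  # a GPU is assumed; the XL model is impractical on CPU

# Load the text-to-image transformer; fall back to 'Malevich' if the
# fine-tuned 'Emojich' weights are not registered in your library version.
dalle = get_rudalle_model('Emojich', pretrained=True, fp16=True, device=device)
tokenizer = get_tokenizer()      # tokenizer for Russian captions
vae = get_vae().to(device)       # decodes image tokens back to pixels

caption = 'улыбающийся кот в стиле эмодзи'  # "smiling cat in emoji style"
pil_images, scores = generate_images(
    caption, tokenizer, dalle, vae,
    top_k=2048, top_p=0.995, images_num=4,  # sampling settings (assumed)
)
for i, img in enumerate(pil_images):
    img.save(f'emoji_{i}.png')
```

The report itself fine-tunes the Malevich (XL) checkpoint on an emoji dataset before such generation; the sketch only shows how the resulting model could be queried, and the Telegram bot presumably wraps an equivalent generation call.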

Related research

11/29/2021 – Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic
Recent text-to-image matching models apply contrastive learning to large...

02/09/2023 – Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning
Augmenting pretrained language models (LMs) with a vision encoder (e.g.,...

05/30/2023 – Resource-Efficient Fine-Tuning Strategies for Automatic MOS Prediction in Text-to-Speech for Low-Resource Languages
We train a MOS prediction model based on wav2vec 2.0 using the open-acce...

07/14/2021 – HTLM: Hyper-Text Pre-Training and Prompting of Language Models
We introduce HTLM, a hyper-text language model trained on a large-scale ...

03/24/2023 – VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining
Assessing the aesthetics of an image is challenging, as it is influenced...

02/22/2022 – RuCLIP – new models and experiments: a technical report
In the report we propose six new implementations of ruCLIP model trained...

05/03/2019 – Adaptive filter ordering in Spark
This report describes a technical methodology to render the Apache Spark...
