Patent Search Using Triplet Networks Based Fine-Tuned SciBERT

07/23/2022
by Utku Umur Acikalin, et al.

In this paper, we propose a novel method for the prior-art search task. We fine-tune the SciBERT transformer model with a triplet network approach, which allows us to represent each patent as a fixed-size vector. This in turn enables efficient vector similarity computations for ranking patents at query time. Our experiments show that the proposed method outperforms the baseline methods.
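The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows how triplet-based fine-tuning of SciBERT could be set up with the sentence-transformers library; this is not the authors' released code, and the toy triplets, margin, hyperparameters, and query/candidate texts are hypothetical. Only the SciBERT checkpoint name refers to the public release.

```python
# A minimal sketch of triplet-based SciBERT fine-tuning with the
# sentence-transformers library; NOT the authors' code. The toy triplets,
# margin, and training settings below are hypothetical.
import numpy as np
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, InputExample

# Wrap SciBERT as a sentence encoder: transformer layers plus mean pooling
# map a whole patent text to one fixed-size vector.
word_embedding = models.Transformer("allenai/scibert_scivocab_uncased",
                                    max_seq_length=256)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(),
                         pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Hypothetical toy triplets: (query patent, relevant prior art, unrelated patent).
triplets = [
    ("Method for cooling lithium-ion battery packs ...",
     "Battery thermal management using liquid coolant channels ...",
     "Graphical user interface for tagging photographs ..."),
]
train_examples = [InputExample(texts=list(t)) for t in triplets]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Triplet loss pulls each anchor toward its positive and pushes the
# negative at least `triplet_margin` farther away in embedding space.
train_loss = losses.TripletLoss(model=model, triplet_margin=1.0)
model.fit(train_objectives=[(train_loader, train_loss)],
          epochs=1, warmup_steps=100)

# At query time, corpus embeddings can be precomputed once, so ranking a
# new query reduces to one matrix-vector similarity computation.
candidates = ["Patent text A ...", "Patent text B ..."]  # hypothetical corpus
corpus_emb = model.encode(candidates, normalize_embeddings=True)
query_emb = model.encode("Method for cooling battery packs ...",
                         normalize_embeddings=True)
ranked = np.argsort(-corpus_emb @ query_emb)  # best-matching indices first
```

Because every patent is encoded independently into a fixed-size vector, the corpus can be embedded offline and only the query needs to be encoded at search time, which is what makes the similarity-based ranking efficient.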

Related Research

07/21/2020 · problemConquero at SemEval-2020 Task 12: Transformer and Soft label-based approaches
In this paper, we present various systems submitted by our team problemC...

01/12/2022 · PromptBERT: Improving BERT Sentence Embeddings with Prompts
The poor performance of the original BERT for sentence semantic similari...

11/07/2018 · Learning acoustic word embeddings with phonetically associated triplet network
Previous researches on acoustic word embeddings used in query-by-example...

07/13/2021 · Deep Ranking with Adaptive Margin Triplet Loss
We propose a simple modification from a fixed margin triplet loss to an ...

04/17/2021 · ASBERT: Siamese and Triplet network embedding for open question answering
Answer selection (AS) is an essential subtask in the field of natural la...

10/09/2019 · Active ordinal tuplewise querying for similarity learning
Many machine learning tasks such as clustering, classification, and data...

11/28/2022 · On the Effectiveness of Parameter-Efficient Fine-Tuning
Fine-tuning pre-trained models has been ubiquitously proven to be effect...
