Web Content Filtering through knowledge distillation of Large Language Models

05/08/2023
by Tamás Vörös, et al.

We introduce a state-of-the-art approach for URL categorization that leverages the power of Large Language Models (LLMs) to address the primary objectives of web content filtering: safeguarding organizations from legal and ethical risks, limiting access to high-risk or suspicious websites, and fostering a secure and professional work environment. Our method utilizes LLMs to generate accurate classifications and then employs established knowledge distillation techniques to create smaller, more specialized student models tailored for web content filtering. Distillation results in a student model with a 9% accuracy improvement in classifying websites, sourced from customer telemetry data collected by a large security vendor, into 30 distinct content categories based on their URLs, surpassing the current state-of-the-art approach. Our student model matches the performance of the teacher LLM with 175 times fewer parameters, allowing the model to be used for in-line scanning of large volumes of URLs, and requires three orders of magnitude less manually labeled training data than the current state-of-the-art approach. Depending on the specific use case, the output generated by our approach can either be returned directly or employed as a pre-filter for more resource-intensive operations involving website images or HTML.
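The pipeline described in the abstract, an LLM teacher producing category labels that are used to train a much smaller student classifier, follows standard knowledge distillation. The Python sketch below illustrates one common formulation of that training step under assumed details; the names (student, url_features, teacher_probs, NUM_CATEGORIES) are hypothetical, and the authors' actual student architecture, URL features, and loss are not specified in this abstract.

import torch.nn.functional as F

NUM_CATEGORIES = 30  # distinct content categories, as in the abstract

def distillation_loss(student_logits, teacher_probs, temperature=2.0):
    # KL divergence between the temperature-softened student distribution and
    # the teacher's per-category probabilities (soft labels from the LLM).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, teacher_probs, reduction="batchmean") * temperature ** 2

def train_step(student, optimizer, url_features, teacher_probs):
    # One distillation step: the student sees only URL-derived features plus the
    # teacher's probabilities, so no manually labeled data is needed at this stage.
    optimizer.zero_grad()
    logits = student(url_features)  # shape: (batch_size, NUM_CATEGORIES)
    loss = distillation_loss(logits, teacher_probs)
    loss.backward()
    optimizer.step()
    return loss.item()

If the LLM teacher emits only hard category labels rather than probability distributions, the same loop reduces to ordinary cross-entropy on those LLM-generated labels; either way the student remains small enough for in-line scanning of large volumes of URLs.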

Related research

05/20/2023
Accurate Knowledge Distillation with n-best Reranking
We propose extending the Sequence-level Knowledge Distillation (Kim and ...

07/21/2023
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images
Knowledge distillation aims to learn a lightweight student network from ...

09/03/2019
Knowledge Distillation for End-to-End Person Search
We introduce knowledge distillation for end-to-end person search. End-to...

06/23/2023
GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models
Knowledge distillation is commonly used for compressing neural networks ...

10/24/2020
Pre-trained Summarization Distillation
Recent state-of-the-art approaches to summarization utilize large pre-tr...

05/22/2023
Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation
In this paper, we present a novel approach for distilling math word prob...

06/08/2023
The economic trade-offs of large language models: A case study
Contacting customer service via chat is a common practice. Because emplo...
