MobIE: A German Dataset for Named Entity Recognition, Entity Linking and Relation Extraction in the Mobility Domain

08/16/2021 ∙ by Leonhard Hennig, et al. ∙ DFKI GmbH

We present MobIE, a German-language dataset, which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. To the best of our knowledge, this is the first German-language dataset that combines annotations for NER, EL and RE, and thus can be used for joint and multi-task learning of these fundamental information extraction tasks. We make MobIE public at https://github.com/dfki-nlp/mobie.


1 Introduction

Named entity recognition (NER), entity linking (EL) and relation extraction (RE) are fundamental tasks in information extraction, and key components in numerous downstream applications, such as question answering Yu et al. (2017) and knowledge base population Ji and Grishman (2011). Recent neural approaches based on pre-trained language models (e.g., BERT Devlin et al. (2019)) have shown impressive results for these tasks when fine-tuned on supervised datasets Akbik et al. (2018); De Cao et al. (2021); Alt et al. (2019). However, annotated datasets for fine-tuning information extraction models are still scarce, even in a comparatively well-resourced language such as German Benikova et al. (2014), and generally only contain annotations for a single task (e.g., for NER: CoNLL'03 German Tjong Kim Sang and De Meulder (2003), GermEval 2014 Benikova et al. (2014); for entity linking: GerNED Ploch et al. (2012)). In addition, research in multi-task Ruder (2017) and joint learning Sui et al. (2020) has shown that models can benefit from exploiting training signals of related tasks. To the best of our knowledge, the work of Schiersch et al. (2018) is the only dataset for German that includes two of the three tasks, namely NER and RE.

In this work, we present MobIE, a German-language information extraction dataset which has been fully annotated for NER, EL, and n-ary RE. The dataset is based upon a subset of documents provided by Schiersch et al. (2018), but focuses on the domain of mobility-related events, such as traffic obstructions and public transport issues. Figure 1 displays an example traffic report with a Canceled Route event. All relations in our dataset are n-ary, i.e. consist of two or more arguments, some of which are optional. Our work expands the dataset of Schiersch et al. (2018) with the following contributions:

  • We significantly extend the dataset with 1,686 annotated documents, more than doubling its size from 1,546 to 3,232 documents.

  • We add entity linking annotations to geo-linkable entity types, with references to Open Street Map (https://www.openstreetmap.org/) identifiers, as well as geo-shapes.

  • We implement an automatic labeling approach using the Snorkel framework Ratner et al. (2017) to obtain additional high-quality, weakly supervised relation annotations.

The dataset setup allows for training and evaluating algorithms for fine-grained typing of geo-locations, for linking these entities to a knowledge base, and for n-ary relation extraction. The final dataset contains entity, linking, and relation annotations.

Figure 1: Traffic report annotated with entity types, entity linking and arguments of a Canceled Route event.

Figure 2: Geolinker: Annotation tool for entity linking

2 Data Collection and Annotation

2.1 Annotation Process

We collected German Twitter messages and RSS feeds based on a set of predefined search keywords and channels (radio stations, police and public transport providers) continuously from June 2015 to April 2019, using the crawlers and configurations provided by Schiersch et al. (2018), and randomly sampled documents from this set for annotation. The documents, including metadata, raw source texts, and annotations, are stored with a fixed document schema as AVRO (avro.apache.org) and JSONL files, but can be trivially converted to standard formats such as CoNLL. Each document was labeled iteratively, first for named entities and concepts, then for entity linking information, and finally for relations. For all manual annotations, documents are first annotated by a single trained annotator, and the annotations are then validated by a second expert. All annotations are labeled with their source, which allows, e.g., distinguishing manual from weakly supervised relation annotations (see Section 2.4).
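The document schema itself is not reproduced here, but as a minimal sketch of the JSONL-to-CoNLL conversion, assuming each record carries character-offset token and entity spans (the field names below are hypothetical, not the actual MobIE schema), a BIO-style export could look like:

```python
import json

def to_conll(record: dict) -> str:
    """Convert one JSONL document record to CoNLL-style token/tag lines.

    Assumed (hypothetical) schema: "tokens" is a list of
    {"text", "start", "end"} dicts with character offsets, and
    "entities" is a list of {"start", "end", "type"} spans.
    """
    tags = ["O"] * len(record["tokens"])
    for ent in record["entities"]:
        # All tokens fully covered by the entity span get B-/I- tags.
        inside = [i for i, t in enumerate(record["tokens"])
                  if t["start"] >= ent["start"] and t["end"] <= ent["end"]]
        for pos, i in enumerate(inside):
            tags[i] = ("B-" if pos == 0 else "I-") + ent["type"]
    return "\n".join(f'{t["text"]}\t{tag}'
                     for t, tag in zip(record["tokens"], tags))

doc = json.loads(
    '{"tokens": [{"text": "Stau", "start": 0, "end": 4},'
    ' {"text": "auf", "start": 5, "end": 8},'
    ' {"text": "A2", "start": 9, "end": 11}],'
    ' "entities": [{"start": 0, "end": 4, "type": "trigger"},'
    ' {"start": 9, "end": 11, "type": "location-street"}]}'
)
print(to_conll(doc))
# Stau	B-trigger
# auf	O
# A2	B-location-street
```

Because both formats are span-based over the same raw text, the conversion is lossless for entity annotations; only the n-ary relation structure needs a richer target format than plain CoNLL columns.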

2.2 Entities

Table 3 lists entity types of the mobility domain that are annotated in our corpus. All entity types except for event_cause originate from the corpus of Schiersch et al. (2018). The main characteristics of the original annotation scheme are the usage of coarse- and fine-grained entity types (e.g., organization, organization-company, location, location-street), as well as trigger entities for phrases which indicate annotated relations, e.g., “Stau” (“traffic jam”). We introduce a minor change by adding a new entity type label event_cause, which serves as a label for concepts that do not explicitly trigger an event, but indicate its potential cause, e.g., “technische Störung” (“technical problem”) as a cause for a Delay event.

2.3 Entity Linking

In contrast to the original corpus, our dataset includes entity linking information. We use Open Street Map (OSM) as our main knowledge base (KB), since many of the geo-entities, such as streets and public transport routes, are not listed in standard KBs like Wikidata. We link all geo-locatable entities, i.e. organizations and locations, to their KB identifiers, and to external identifiers (Wikidata) where possible. We include geo-information as an additional source of ground truth whenever a location is not available in OSM. (This is mainly the case for location-route and location-stop entities, which are derived from proprietary KBs of Deutsche Bahn and Rhein-Main-Verkehrsverbund; standardized ids for these entity types, e.g. DLID/DHID, were not yet available at the time of creation of this dataset.) Geo-information is provided as points and polygons in WKB format (https://www.ogc.org/standards/sfa).
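A 2D WKB point can be decoded with the standard library alone; the following is a minimal sketch (the coordinates are illustrative, polygons would need a fuller parser, and real pipelines would typically use a geometry library such as Shapely):

```python
import struct

def parse_wkb_point(data: bytes):
    """Parse a WKB-encoded 2D point (OGC Simple Features) into (x, y).

    Handles both little- and big-endian encodings; raises on any
    geometry type other than Point.
    """
    endian = "<" if data[0] == 1 else ">"
    (geom_type,) = struct.unpack_from(endian + "I", data, 1)
    if geom_type != 1:  # 1 = Point in the WKB geometry-type table
        raise ValueError(f"not a WKB point (type {geom_type})")
    x, y = struct.unpack_from(endian + "dd", data, 5)
    return x, y

# Round-trip illustrative lon/lat coordinates through the encoding:
# 1 byte order marker, 4-byte type, two 8-byte doubles = 21 bytes.
payload = struct.pack("<BIdd", 1, 1, 13.3777, 52.5163)
print(parse_wkb_point(payload))  # → (13.3777, 52.5163)
```

WKB stores coordinates as raw IEEE 754 doubles, so the round trip is exact; polygons extend the same layout with ring and point counts.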

Figure 2 shows the annotation tool used for entity linking. The tool displays the document's text, lists all annotated geo-location entities along with their types, and shows the list of retrieved KB candidates. The annotator first checks the quality of the entity type annotation, and may label the entity as incorrect if applicable. For each valid entity, the annotator then either labels one of the candidates shown on the map as correct, or selects missing if none of the candidates is correct.

2.4 Relations

Relation: Arguments
Accident: default-args, delay
Canceled Route: default-args
Canceled Stop: default-args, route
Delay: default-args, delay
Obstruction: default-args, delay
Rail Repl. Serv.: default-args, delay
Traffic Jam: default-args, delay, jam-length
Table 1: Relation definitions of the MobIE dataset. The default-args for all relations are: location, trigger, direction, start-loc, end-loc, start-date, end-date, cause. location and trigger are required arguments for all relations; the other arguments are optional.

Table 1 lists relation types and their arguments. The relation set focuses on events that may negatively impact traffic flow, such as Traffic Jams and Accidents. All relations have a set of required and optional arguments, and are labeled with their annotation source, i.e., human or weakly-supervised. Different relations may co-occur in a single sentence, e.g. Accidents may cause Traffic Jams, which are often reported together.

Human annotation. The annotation in Schiersch et al. (2018) was performed manually. Annotators labeled only explicitly expressed relations in which all arguments occurred within a single sentence. The authors report inter-annotator agreement for relations in terms of Cohen's κ.

Automatic annotation with Snorkel. To reduce the amount of labor required for relation annotation, we explored an automatic, weakly supervised labeling approach. Our intuition is that, due to the formulaic nature of texts in the traffic report domain, weak heuristics that exploit the combination of trigger key phrases and specific location types provide a good signal for relation labeling. For example, “A2 Dortmund Richtung Hannover 2 km Stau” (“A2 Dortmund towards Hannover 2 km traffic jam”) is easily identified as a Traffic Jam relation mention due to the occurrence of the “Stau” trigger in combination with the road name “A2”.

Figure 3: Snorkel applies user-defined, ‘weak’ labeling functions (LF) to unlabeled data and learns a model to reweigh and combine the LFs’ outputs into probabilistic labels.

We use the Snorkel weak labeling framework Ratner et al. (2017). Snorkel unifies multiple weak supervision sources by modeling their correlations and dependencies, with the goal of reducing label noise Ratner et al. (2016). Weak supervision sources are expressed as labeling functions (LFs); a label model combines the votes of all LFs, weighted by their estimated accuracies, and outputs a set of probabilistic labels (see Figure 3).

We implement LFs for the relation classification of trigger concepts and for the role classification of trigger-argument concept pairs. The output is used to reconstruct n-ary relation annotations. Trigger classification LFs include keyword list checks as well as checks of contextual entity types. Argument role classification LFs are inspired by Chen and Ji (2009), and include distance heuristics, the entity type of the argument, the event type output of the trigger labeling functions, context words of the argument candidate, and the relative position of the entity with respect to the trigger. We trained the Snorkel label model on all unlabeled documents in the dataset that contained at least one trigger entity (690 documents). The probabilistic relation type and argument role labels were then combined into n-ary relation annotations.

We verified the performance of the Snorkel model using a randomly selected development subset of 55 documents with human-annotated relations. On this dev set, we measured the F1-score and accuracy of Snorkel-assigned trigger class labels and of role labels for trigger-argument pairs. The results confirm our intuition that for the traffic report domain, weak labeling functions can provide useful supervision signals.

3 Dataset Statistics

Twitter RSS Total
# docs 2,562 670 3,232
# sentences 5,409 1,668 7,077
# tokens 62,330 28,641 90,971
# entities 13,573 6,911 20,484
# linked 8,715 4,389 13,104
# events 1,461 575 2,036
Table 2: Dataset statistics per source

We report the statistics of the MobIE dataset in Table 2. The majority of documents originate from Twitter, but RSS messages are longer on average, and typically contain more annotations (e.g., roughly 10.3 entities/doc for RSS versus 5.3 entities/doc for Twitter). The annotated corpus is provided with a standardized Train/Dev/Test split. To ensure high data quality for evaluating event extraction, we include only documents with manually annotated events in the Test split.

Table 3 lists the distribution of entity annotations in the dataset, and Table 4 the distribution of linked entities. Of the annotated entities covering 20 entity types, organization* and location* entities are linked, either to a KB reference id, or marked as NIL. The remaining entities are non-linkable types, such as time and date expressions. The fraction of NILs among linkable entities is roughly 43% overall, but varies significantly with entity type. Locations that could not be assigned to a specific subtype are more often resolved as NIL. A large fraction of these are highway exits (e.g. “Pforzheim-Ost”) and non-German locations, which were not included in the subset of OSM integrated in our KB. In addition, candidate retrieval for organizations often returned no viable candidates, especially for non-canonical name variants used in tweets.
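The overall NIL fraction and the per-type variation follow directly from the counts in Table 4:

```python
# Per-type (total, NIL) counts copied from Table 4.
table4 = {
    "location": (1961, 1258),
    "location-city": (1942, 456),
    "location-route": (2622, 484),
    "location-stop": (3027, 1129),
    "location-street": (1246, 210),
    "organization": (417, 417),
    "organization-company": (1889, 1697),
}
total = sum(n for n, _ in table4.values())  # 13,104 linkable entities
nil = sum(k for _, k in table4.values())    # 5,651 NIL annotations
print(f"overall NIL rate: {nil / total:.1%}")
for etype, (n, k) in table4.items():
    print(f"{etype}: {k / n:.1%} NIL")
```

The per-type rates make the variation concrete: coarse location entities resolve to NIL about 64% of the time, while location-route does so only about 18% of the time.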

The dataset contains 2,036 annotated traffic events, partly manually annotated and partly obtained via weak supervision. Table 5 shows the distribution of relation types. Canceled Stop and Rail Replacement Service relations occur less frequently in our data than the other relation types, while Obstruction is the most frequent class.

Twitter RSS Total
date 434 549 983
disaster-type 78 18 96
distance 37 175 212
duration 413 157 570
event-cause 898 116 1,014
location 887 1,074 1,961
location-city 844 1,098 1,942
location-route 2,298 324 2,622
location-stop 1,913 1,114 3,027
location-street 634 612 1,246
money 16 3 19
number 527 198 725
org-position 4 0 4
organization 296 121 417
organization-company 1,843 46 1,889
percent 1 0 1
person 135 0 135
set 18 37 55
time 683 410 1,093
trigger 1,614 859 2,473
Table 3: Distribution of entity annotations
# entities # KB # NIL
location 1,961 703 1,258
location-city 1,942 1,486 456
location-route 2,622 2,138 484
location-stop 3,027 1,898 1,129
location-street 1,246 1,036 210
organization 417 0 417
organization-company 1,889 192 1,697
Table 4: Distribution of entity linking annotations
Twitter RSS Total
Accident 316 80 396
Canceled Route 259 75 334
Canceled Stop 25 42 67
Delay 337 48 385
Obstruction 386 140 526
Rail Replacement Service 71 27 98
Traffic Jam 67 163 230
Table 5: Distribution of relation annotations

4 Conclusion

We presented a dataset for named entity recognition, entity linking and relation extraction in German mobility-related social media texts and traffic reports. Although not as large as some popular task-specific German datasets, MobIE is, to the best of our knowledge, the first German-language dataset that combines annotations for NER, EL and RE, and can thus be used for joint and multi-task learning of these fundamental information extraction tasks. The dataset is freely available under a CC-BY 4.0 license at https://github.com/dfki-nlp/mobie.

Acknowledgments

We would like to thank Elif Kara, Ursula Strohriegel and Tatjana Zeen for the annotation of the dataset. This work has been supported by the German Federal Ministry of Transport and Digital Infrastructure as part of the project DAYSTREAM (01MD19003E), and by the German Federal Ministry of Education and Research as part of the project CORA4NLP (01IW20010).

References