NER's Present and Future: 02. Model structure and data set status
2024-07-04


This post has been updated to match the latest trends as of 2023, so please refer to the article below.

NER's Present and Future Ver. 2: Korean NER Data Set Summary


This post, the second installment in the <NER's Present and Future> series, covers 'NER model structure and data set status'. It follows on from the first topic, 'From concepts to diverse concepts', so if you haven't read that yet, we recommend starting there.

* NER's Present and Future: 01. From concepts to diverse concepts (read the post)


NER model structure

According to the paper 'A Survey on Deep Learning for Named Entity Recognition', the structure of the NER model can be divided into a three-step process as shown in the figure below.

[Figure: the three-step structure of NER models]
* Figure source: https://arxiv.org/pdf/1812.09449.pdf


(1) Distributed representations for input*

Pre-trained word embeddings, character-level embeddings, POS* tags, and gazetteers are used in the layer that represents the input data as vectors.

(2) Context Encoder

Models such as CNNs*, RNNs*, language models*, and Transformers* are used in the layer that encodes contextual information.

(3) Tag Decoder

Models such as softmax, CRF*, RNN, and pointer network are used in the layer that decodes tag information.

 

However, not all models strictly follow this structure. Deep learning models in particular are trained end to end, so the steps are not always clearly separated. Still, once traditional approaches are included, the three steps above cover most NER models.
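To make the three steps concrete, here is a minimal sketch in PyTorch; it illustrates the structure only and is not the architecture of any particular paper. A word-embedding layer plays the role of the distributed input representation, a bidirectional LSTM is the context encoder, and a per-token linear classifier is the tag decoder (a CRF layer is a common alternative for the decoder).

import torch
import torch.nn as nn

class SimpleNERTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        # (1) Distributed representation of the input: word embeddings
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # (2) Context encoder: bidirectional LSTM over the token sequence
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # (3) Tag decoder: per-token classifier (a CRF is a common alternative)
        self.decoder = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        encoded, _ = self.encoder(embedded)    # (batch, seq_len, 2 * hidden_dim)
        return self.decoder(encoded)           # (batch, seq_len, num_tags)

# 9 tags = BIO tags for 4 entity types (e.g. PER/LOC/ORG/MISC) plus O
model = SimpleNERTagger(vocab_size=10000, num_tags=9)
logits = model(torch.randint(0, 10000, (1, 12)))
predicted_tags = logits.argmax(dim=-1)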


* Distributed representations for input: distributed representation of the inputs
* POS (part-of-speech), https://en.wikipedia.org/wiki/Part_of_speech
* CNN (Convolutional Neural Networks), https://en.wikipedia.org/wiki/Convolutional_neural_network
* RNN (Recurrent Neural Network), https://en.wikipedia.org/wiki/Recurrent_neural_network
* Language model, https://en.wikipedia.org/wiki/Language_model
* Transformers, https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
* CRF (Conditional Random Field), https://en.wikipedia.org/wiki/Conditional_random_field

Status and performance evaluation of NER libraries


Currently, it is difficult to find an official NER library built only for Korean; instead, Korean can be found in most models that support multiple languages. The characteristics of each library are as follows:


The evaluation was then carried out with a data set* distributed on Kaggle*. Because the number of classes in the data set and in each library differed, the classes of each library first had to be mapped to the classes in the data set; during this process we confirmed that libraries able to classify more classes than the reference data set inevitably showed lower precision. For that reason, precision and the F1-score derived from it were excluded as criteria for judging NER performance, and library performance was assessed based only on recall and processing time. The results are as follows:

It can be seen that Stanford NER Tagger falls behind in terms of processing time (based on 1,000 reviews), while flair and polyglot show lower performance in terms of recall.
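For reference, this kind of measurement can be reproduced roughly as follows. This is a minimal sketch using spaCy only; the model name en_core_web_sm, the sample sentence, and the simple span-matching recall are our own assumptions, not the actual evaluation script used above.

import time
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model has been downloaded

def evaluate(sentences, gold_spans):
    # gold_spans: one set of (entity text, label) pairs per sentence
    start = time.time()
    found, total = 0, 0
    for sentence, gold in zip(sentences, gold_spans):
        predicted = {(ent.text, ent.label_) for ent in nlp(sentence).ents}
        found += len(gold & predicted)
        total += len(gold)
    recall = found / total if total else 0.0
    return recall, time.time() - start

sentences = ["Barack Obama visited Seoul in 2014."]
gold_spans = [{("Barack Obama", "PERSON"), ("Seoul", "GPE")}]
print(evaluate(sentences, gold_spans))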


* Kaggle, https://en.wikipedia.org/wiki/Kaggle
* https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus
* NLTK (Natural Language Toolkit), https://en.wikipedia.org/wiki/Natural_Language_Toolkit
* Stanford, https://nlp.stanford.edu/software/CRF-NER.html#Models
* SpaCy, https://en.wikipedia.org/wiki/SpaCy
* flair, https://github.com/flairNLP/flair
* Hugging Face, https://huggingface.co/datasets
* polyglot, https://polyglot.readthedocs.io/en/latest/
* DeepPavlov, https://github.com/deepmipt/DeepPavlov


Representative English NER data sets


(1) CoNLL 2003 (Sang and Meulder, 2003)*

: Copyright Policy - DUA

: 1,393 news articles in English (mostly sports-related)

: 4 types of annotated* entities — {LOC (location), ORG (organization), PER (person), MISC (miscellaneous)}


* CoNLL 2003, https://www.clips.uantwerpen.be/conll2003/ner/
* Annotated: (of a book, corpus, etc.) supplied with explanatory notes
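If you want to inspect the data directly, CoNLL 2003 is also mirrored on the Hugging Face Hub (listed in the library notes above). A minimal sketch, assuming the datasets library and the "conll2003" dataset id:

from datasets import load_dataset

# Load the CoNLL 2003 NER data (assumes the "conll2003" dataset id on the Hugging Face Hub)
dataset = load_dataset("conll2003")
example = dataset["train"][0]

# Each example is a list of tokens with integer NER tags in the BIO scheme
# over the four entity types LOC / ORG / PER / MISC
tag_names = dataset["train"].features["ner_tags"].feature.names
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, tag_names[tag_id])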


(2) OntoNotes 5.0 (Weischedel et al., 2013)*

: Copyright — LDC

: The types and amounts of data are as shown below.


* OntoNotes 5.0, https://catalog.ldc.upenn.edu/LDC2013T19
* Pivot: Old Testament and New Testament text
* Table source: https://catalog.ldc.upenn.edu/LDC2013T19


: 18 types of annotated entities

* Table source: https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf


(3) MUC-6 (Grishman and Sundheim, 1996)

: Copyright Policy — LDC

: News articles published in the Wall Street Journal

: 3 types of Annotated Entities — {PER, LOC, ORG}


* MUC-6, https://cs.nyu.edu/~grishman/muc6.html

 

(4) WNUT 17: Emerging and Rare Entity Recognition (Derczynski et al., 2016)

: Copyright Policy — CC-BY 4.0

: Social media text (YouTube comments, Stack Overflow responses, Twitter posts, and Reddit comments)

: 6 types of annotated entities — {Person, Location, Group, Creative Work, Corporation, Product}


* WNUT 17, https://noisy-text.github.io/2017/emerging-rare-entities.html


Representative Korean NER data sets

 

Korean NER data is very scarce. Currently, only three Korean NER data sets have been publicly released, and commercial use of all of them is restricted.

 

(1) National Institute of Korean Language NER data set

: 3,555 items in total

: Uses the BIO tagging scheme (see the example after this entry)

: 5 types of annotated entities — {Place (LC), Date (DT), Organization (OG), Time (TI), Person (PS)}

 

* National Institute of Korean Language, Everyone's Corpus (Modu Corpus), https://corpus.korean.go.kr
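For readers unfamiliar with the BIO scheme used by all three Korean data sets, here is a small illustrative sketch in Python. The sentence, its tokenization, and the tags are made up for illustration and are not taken from the data set: B- marks the first token of an entity, I- a continuation, and O a token outside any entity.

# Hypothetical BIO-tagged sentence using the tag set above (PS = person, DT = date, LC = place)
tokens = ["홍길동", "은", "1990년", "서울", "에서", "태어났다"]
tags   = ["B-PS",  "O",  "B-DT",   "B-LC", "O",    "O"]

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")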

 

(2) Korea Maritime University Natural Language Processing Laboratory NER data set

: 23,964 items in total

: Uses the BIO tagging scheme

: 10 types of annotated entities — {Person (PER), Organization (ORG), Place (LOC), Other (POH), Date (DAT), Time (TIM), Duration (DUR), Currency (MNY), Ratio (PNT), Other Quantity Expressions (NOH)}

 

* Korea Maritime University Natural Language Processing Laboratory on GitHub, https://github.com/kmounlp

 

(3) NAVER NLP CHALLENGE 2018

: 82,393 items in total

: Uses the BIO tagging scheme

: 14 types of annotated entities — {Person (PER), Field of Study (FLD), Artifact (AFW), Organization (ORG), Location (LOC), Civilization and Culture (CVL), Date (DAT), Time (TIM), Numbers (NUM), Developments and Events (EVT), Animals (ANM), Plants (PLT), Metals/Rocks/Chemicals (MAT), Medical Terms/IT Related Terms (TRM)}

 

* Naver NLP Challenge GitHub, https://github.com/naver/nlp-challenge


This concludes the second topic in the <NER's Present and Future> series, 'Model structure and data set status'. The series will continue soon with the third topic, 'Future development direction and goals'.


NER's Present and Future

  • NER's Present and Future: 01. From concepts to diverse concepts
  • NER's Present and Future: 02. Model structure and data set status
  • NER's Present and Future: 03. Future development direction and goals