Dependency Parsing
2024/07/07
Dependency parsing involves identifying the grammatical structure of a sentence by extracting relationships between “head” words and their modifiers.
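As an illustrative sketch (not from the article), a dependency parse can be stored compactly as one (head index, relation) pair per token, with index 0 reserved for an artificial ROOT node; the sentence and labels below are a made-up example:

```python
# Hypothetical example: "She ate an apple", with "ate" as the sentence root.
# heads[i] = (head_index, relation) for token i; index 0 is the ROOT node.
tokens = ["ROOT", "She", "ate", "an", "apple"]
heads = {1: (2, "nsubj"), 2: (0, "root"), 3: (4, "det"), 4: (2, "obj")}

# Print each modifier with the arc to its head word.
for i in sorted(heads):
    head, rel = heads[i]
    print(f"{tokens[i]:>6} --{rel}--> {tokens[head]}")
```

Every token has exactly one head, so this head-index encoding is sufficient to reconstruct the full tree.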

Penn Treebank

Models are evaluated on the Stanford Dependencies conversion of the Penn Treebank, with metrics including POS tagging accuracy, unlabeled attachment score (UAS), and labeled attachment score (LAS). The table below highlights key models and their performance.

| Model | POS | UAS | LAS | Paper / Source | Code |
| ----- | --- | --- | --- | -------------- | ---- |
| Label Attention Layer + HPSG + XLNet | 97.3 | 97.42 | 96.26 | Paper | Code |
| Pre-training + XLNet | – | 97.30 | 95.92 | Paper | Code |
| ACE + fine-tune | – | 97.20 | 95.80 | Paper | Code |
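The two attachment metrics are simple per-token accuracies: UAS counts a token as correct if its predicted head matches the gold head, while LAS additionally requires the relation label to match. A minimal sketch, using a hypothetical four-token sentence:

```python
def attachment_scores(gold, pred):
    """gold/pred: lists of (head_index, relation) pairs, one per token."""
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

# Made-up example: all four heads are right, but one relation label is wrong.
gold = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "nmod")]

uas, las = attachment_scores(gold, pred)
print(f"UAS={uas:.2f} LAS={las:.2f}")  # UAS=1.00 LAS=0.75
```

Because LAS adds the labeling requirement on top of head attachment, it can never exceed UAS, which matches the column ordering in the tables above.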

Universal Dependencies

Models in this task are evaluated on syntactic dependency parsing across multiple languages, adhering to the Universal Dependencies (UD) standard. Scores are reported as LAS plus the morphology-aware MLAS and lemma-aware BLEX metrics.

| Model | LAS | MLAS | BLEX | Paper / Source | Code |
| ----- | --- | ---- | ---- | -------------- | ---- |
| Stanford | 74.16 | 62.08 | 65.28 | Paper | Code |
| UDPipe Future | 73.11 | 61.25 | 64.49 | Paper | Code |
| HIT-SCIR | 75.84 | 59.78 | 65.33 | Paper | Code |
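UD treebanks are distributed in the tab-separated CoNLL-U format, whose ten columns are ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A minimal reader sketch (real treebanks also contain multiword-token and empty-node lines, which this skips):

```python
def read_conllu(text):
    """Parse CoNLL-U text into a list of sentences (lists of token dicts)."""
    sentences, current = [], []
    for line in text.strip().splitlines():
        if not line.strip():          # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        if line.startswith("#"):      # sentence-level comments
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue                  # multiword-token / empty-node lines
        current.append({"id": int(cols[0]), "form": cols[1],
                        "head": int(cols[6]), "deprel": cols[7]})
    if current:
        sentences.append(current)
    return sentences

# Tiny made-up two-token sentence in CoNLL-U form.
sample = ("# text = She ate\n"
          "1\tShe\tshe\tPRON\tPRP\t_\t2\tnsubj\t_\t_\n"
          "2\tate\teat\tVERB\tVBD\t_\t0\troot\t_\t_\n")
parsed = read_conllu(sample)
```

The HEAD column uses the same 1-based token IDs, with 0 marking the sentence root, so the output plugs directly into the attachment-score style of evaluation used above.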

Cross-lingual Zero-shot Dependency Parsing

This task involves parsing sentences from one language without any labeled training trees for that language, using models evaluated against the Universal Dependency Treebank.

| Model | UAS | LAS | Paper / Source | Code |
| ----- | --- | --- | -------------- | ---- |
| XLM-R + SubDP | – | 79.6 | Paper | Code |
| Cross-Lingual ELMo | 84.2 | 77.3 | Paper | Code |

Unsupervised Dependency Parsing

Unsupervised models infer dependency parses without labeled data and are often evaluated against the Penn Treebank.

| Model | UAS | Paper / Source |
| ----- | --- | -------------- |
| Iterative reranking | 66.2 | Paper |
| Combined System | 64.4 | Paper |