Dependency parsing involves identifying the grammatical structure of a sentence by extracting relationships between “head” words and the words that modify them.
Models are evaluated using the Stanford Dependencies conversion of the Penn Treebank, with metrics such as part-of-speech (POS) tagging accuracy, unlabeled attachment score (UAS), and labeled attachment score (LAS). The table below highlights key models and their performance; a short sketch of how UAS and LAS are computed follows it.
Model | POS | UAS | LAS | Paper / Source | Code |
---|---|---|---|---|---|
Label Attention Layer + HPSG + XLNet | 97.3 | 97.42 | 96.26 | Paper | Code |
Pre-training + XLNet | - | 97.30 | 95.92 | Paper | Code |
ACE + fine-tune | - | 97.20 | 95.80 | Paper | Code |
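
As a rough illustration of the attachment metrics, the minimal Python sketch below computes per-sentence UAS (fraction of tokens whose predicted head is correct) and LAS (correct head and dependency label). The function name and list-based inputs are illustrative rather than taken from any particular evaluation toolkit; official scripts additionally handle details such as excluding punctuation.

```python
def attachment_scores(gold_heads, gold_labels, pred_heads, pred_labels):
    """Per-sentence UAS/LAS from parallel lists of head indices and relation labels.

    gold_heads / pred_heads: head index for each token (0 = root).
    gold_labels / pred_labels: dependency relation for each token.
    """
    assert len(gold_heads) == len(gold_labels) == len(pred_heads) == len(pred_labels)
    total = len(gold_heads)
    # UAS: the predicted head matches the gold head.
    uas = sum(g == p for g, p in zip(gold_heads, pred_heads)) / total
    # LAS: both the head and the relation label must match.
    las = sum(
        gh == ph and gl == pl
        for gh, gl, ph, pl in zip(gold_heads, gold_labels, pred_heads, pred_labels)
    ) / total
    return uas, las


# Example: a three-token sentence with gold heads [2, 0, 2] (1-based, 0 = root).
print(attachment_scores([2, 0, 2], ["nsubj", "root", "obj"],
                        [2, 0, 1], ["nsubj", "root", "obj"]))
# -> roughly (0.67, 0.67): the last token's head is wrong, so it counts for neither metric.
```
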
Models in this task are evaluated on syntactic dependency parsing across many languages, following the Universal Dependencies (UD) annotation standard; scores are reported as LAS alongside the morphology-aware MLAS and lemma-aware BLEX metrics. A sketch of the CoNLL-U format used by these treebanks follows the table.
Model | LAS | MLAS | BLEX | Paper / Source | Code |
---|---|---|---|---|---|
Stanford | 74.16 | 62.08 | 65.28 | Paper | Code |
UDPipe Future | 73.11 | 61.25 | 64.49 | Paper | Code |
HIT-SCIR | 75.84 | 59.78 | 65.33 | Paper | Code |
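
UD treebanks are distributed in the tab-separated CoNLL-U format, where each token line carries (among other columns) the word form, lemma, POS tag, head index, and dependency relation. The reader below is an illustrative sketch, not the official evaluation script: it skips comment lines, multiword-token ranges, and empty nodes, and it does not perform the tokenization alignment that the shared-task evaluation applies before scoring.

```python
def read_conllu(path):
    """Yield sentences as lists of (form, lemma, upos, head, deprel) tuples."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                             # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
                continue
            if line.startswith("#"):                 # sentence-level metadata
                continue
            cols = line.split("\t")
            token_id = cols[0]
            if "-" in token_id or "." in token_id:   # multiword tokens / empty nodes
                continue
            # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
            form, lemma, upos = cols[1], cols[2], cols[3]
            head, deprel = int(cols[6]), cols[7]
            sentence.append((form, lemma, upos, head, deprel))
    if sentence:
        yield sentence
```

Head indices and relation labels extracted this way feed directly into attachment-score computations like the sketch above.
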
In the cross-lingual zero-shot setting, models parse sentences in a target language without seeing any labeled training trees for that language, and are evaluated on Universal Dependencies treebanks.
Model | UAS | LAS | Paper / Source | Code |
---|---|---|---|---|
XLM-R + SubDP | - | 79.6 | Paper | Code |
Cross-Lingual ELMo | 84.2 | 77.3 | Paper | Code |
Unsupervised models induce dependency parses without any labeled training data and are typically evaluated on the Penn Treebank, reporting UAS only since the induced trees are unlabeled.
Model | UAS | Paper / Source |
---|---|---|
Iterative reranking | 66.2 | Paper |
Combined System | 64.4 | Paper |