Abstract
In this paper, we propose a method that exploits full parsing information by representing it as features of argument classification models and as constraints in integer linear programs. In addition, to take advantage of both SVM-based and Maximum Entropy-based argument classification models, we combine their scoring matrices and use the combined matrix in the above-mentioned integer linear programs. The experimental results show that full parsing information not only increases the F-score of argument classification models by 0.7%, but also effectively removes all labeling inconsistencies, which increases the F-score by 0.64%. The ensemble of SVM and ME further boosts the F-score by 0.77%. Our system achieves an F-score of 76.53% on the development set and 76.38% on Test WSJ.
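The ensemble step described above can be illustrated with a minimal sketch. The matrix shapes, weighting scheme (a simple weighted average), and role labels below are illustrative assumptions, not details taken from the paper; the paper's actual combination and the ILP constraints over full-parse structure are more involved.

```python
# Hedged sketch: combining candidate-by-label scoring matrices from two
# classifiers (e.g., an SVM and a Maximum Entropy model). The weight `w`,
# the label inventory, and the matrices are all illustrative assumptions.

def combine_scores(svm_scores, me_scores, w=0.5):
    """Element-wise weighted average of two candidate-by-label score matrices."""
    return [
        [w * s + (1 - w) * m for s, m in zip(srow, mrow)]
        for srow, mrow in zip(svm_scores, me_scores)
    ]

def best_labels(combined, labels):
    """Pick the highest-scoring label per candidate.

    (For illustration only: the paper instead feeds the combined matrix into
    an integer linear program so that structural constraints are respected.)
    """
    return [labels[max(range(len(row)), key=row.__getitem__)] for row in combined]

labels = ["A0", "A1", "AM-TMP", "O"]   # illustrative role inventory
svm = [[0.7, 0.2, 0.05, 0.05],
       [0.1, 0.6, 0.10, 0.20]]
me = [[0.6, 0.3, 0.05, 0.05],
      [0.2, 0.5, 0.20, 0.10]]

combined = combine_scores(svm, me)
print(best_labels(combined, labels))  # -> ['A0', 'A1']
```

A per-element average keeps the combined matrix in the same score range as its inputs, so it can replace either model's matrix in the downstream optimization without rescaling.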
Original language | English |
---|---|
Pages | 233-236 |
Number of pages | 4 |
Publication status | Published - 2005 |
Event | 9th Conference on Computational Natural Language Learning, CoNLL 2005 - Ann Arbor, MI, United States Duration: 29 Jun 2005 → 30 Jun 2005 |
Conference | 9th Conference on Computational Natural Language Learning, CoNLL 2005 |
---|---|
Country/Territory | United States |
City | Ann Arbor, MI |
Period | 29/06/05 → 30/06/05 |