Update README.md
gyatso736 authored Nov 29, 2024
1 parent 077e01c commit 00e55f8
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions README.md
@@ -46,3 +46,6 @@ tokenize file with command: python NyimaTashi.py input_file input_file
3、Citation for research using the tokenization system: "Research on Tibetan Word Segmentation Method Based on Bi-LSTM Combined with CRF" by Gesang Jiacuo et al.
4、The tokenization system was developed jointly by the team led by Professor Nyima Tashi at Tibet University and the team led by Professor Tong Xiao and Professor Jingbo Zhu at Northeastern University. Its purpose is to support people learning and researching Tibetan information processing. Please refrain from using it for illegal purposes.
5、Due to limitations in the corpus, segmentation errors may occur on unfamiliar words. We welcome any constructive suggestions for improvement. For inquiries, please contact us at: [email protected].

Acknowledgments:
Special thanks to my advisor, Academician Nyima Tashi of Tibet University, and to Professor Tong Xiao and Professor Jingbo Zhu of the Natural Language Processing Laboratory at Northeastern University for their guidance and support. Thanks also to Li Yinqiao, Li Bei, Jing Yi, Zheng Tong, and the other teachers and students of the NEU NLP Lab for their guidance and help, and to Abudurexiti for his collaboration.
