NLP Status Report 2017-5-15


Revision as of 02:23, 15 May 2017

Date: 2017/4/5

Jiyuan Zhang

Aodong LI

Shiyue Zhang
Last Week:
  • got an M-NMT result of 28.92 BLEU (+2.2 over the baseline of 26.73)
  • trained word2vec on the large zh-uy data
  • tested the NMT baseline when there are UNKs in the references: BLEU = 34.10 (better than MOSES = 33.10), which suggests UNK is the biggest problem in NMT
  • found a problem in the dataset: some sentences are reversed
This Week:
  • test the model with untrained embeddings
  • fix the reversed-sentence problem and rerun MOSES, NMT, and M-NMT
  • implement the UNK model (a sketch follows below)
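For reference, a minimal sketch of the vocabulary side of UNK handling. The report does not specify how the UNK model works, so this only shows the common first step of mapping rare words to an <UNK> token before training; the function names, the cut-off, and the toy corpus are illustrative assumptions, not the tf_translate implementation.

from collections import Counter

def build_vocab(sentences, max_size=30000):
    """Keep the max_size most frequent words; everything else maps to <UNK>."""
    counts = Counter(w for s in sentences for w in s.split())
    words = [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(["<UNK>", "<s>", "</s>"] + words)}

def to_ids(sentence, vocab):
    """Convert a sentence to ids, replacing out-of-vocabulary words by <UNK>."""
    unk = vocab["<UNK>"]
    return [vocab.get(w, unk) for w in sentence.split()]

# Toy example: a tiny cut-off forces rare words to become <UNK>.
corpus = ["the cat sat", "the dog sat", "a rare ocelot appeared"]
vocab = build_vocab(corpus, max_size=4)
print(to_ids("the ocelot sat", vocab))   # the rare word maps to the <UNK> id
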
Shipan Ren
Last Week:
  • configured the environment and ran the tf_translate code
  • read machine translation papers
  • learned the LSTM model and the seq2seq model
This Week:
  • learn the implementation of the seq2seq model (see the sketch at the end of this report)
  • read the tf_translate code
  • understand the main parts of the code
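
For the seq2seq study item above, a minimal numpy sketch of the LSTM encoder-decoder idea: encode the source sentence into a final state, then decode greedily from that state. The sizes, weight names, and greedy decoder are illustrative assumptions and not the tf_translate implementation; the weights are random, so the printed output is arbitrary.

import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations, shape (4n,)
    n = h.shape[0]
    i = 1 / (1 + np.exp(-z[0:n]))              # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))            # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))          # output gate
    g = np.tanh(z[3*n:4*n])                    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
V, E, H = 20, 8, 16                            # vocab, embedding, hidden sizes (toy values)
emb = rng.normal(size=(V, E)) * 0.1            # embedding table shared by both sides
enc_W = rng.normal(size=(4*H, E)) * 0.1        # encoder LSTM parameters
enc_U = rng.normal(size=(4*H, H)) * 0.1
enc_b = np.zeros(4*H)
dec_W = rng.normal(size=(4*H, E)) * 0.1        # decoder LSTM parameters
dec_U = rng.normal(size=(4*H, H)) * 0.1
dec_b = np.zeros(4*H)
out_W = rng.normal(size=(V, H)) * 0.1          # projection from hidden state to vocab logits

def translate(src_ids, bos=1, eos=2, max_len=10):
    # Encoder: fold the source sentence into a final (h, c) state.
    h, c = np.zeros(H), np.zeros(H)
    for t in src_ids:
        h, c = lstm_step(emb[t], h, c, enc_W, enc_U, enc_b)
    # Decoder: start from the encoder state and emit target tokens greedily.
    y, outputs = bos, []
    for _ in range(max_len):
        h, c = lstm_step(emb[y], h, c, dec_W, dec_U, dec_b)
        y = int(np.argmax(out_W @ h))          # greedy choice of the next token
        if y == eos:
            break
        outputs.append(y)
    return outputs

print(translate([3, 7, 5]))                    # untrained weights, so the output ids are arbitrary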