NLP Status Report 2017-3-13
{| class="wikitable"
! Date !! People !! Last Week !! This Week
|-
| rowspan="5" | 2017/1/3
| Yang Feng
|
|
|-
| Jiyuan Zhang
|
|
|-
| Andi Zhang
|
|
|-
| Shiyue Zhang
|
* added the trained memory-attention model to the neural model (43.0) and got a 2+ BLEU gain (45.19); this needs more validation and improvement
* ran the baseline model on cs-en data: it did well on the train set but poorly on the test set
* ran the baseline model on en-fr data and hit an 'inf' problem
* fixed the 'inf' problem by debugging the code of the mask-added baseline model (see the masking sketch after the table)
* re-running on the cs-en and en-fr data
|
* continue the baseline on big data: get results on the cs-en and en-fr data, and train on zh-en data from [http://www.statmt.org/wmt17/translation-task.html#download WMT17]
* continue refining the memory-attention model: retrain to check whether the 2+ BLEU gain is just by chance, and try more memory-attention structures (relu, a(t-1), y(t-1), ...); see the attention sketch after the table
|-
| Peilun Xiao
|
|
|}
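The 'inf' fix above came from masking padded timesteps in the loss. The report does not include the actual code, so the following is a minimal numpy sketch of the idea, assuming a softmax decoder output and a 0/1 padding mask; the name masked_cross_entropy and the eps floor are illustrative, not the project's real code. The point is that log(0.0) on a padded (or badly mispredicted) step returns -inf, which then poisons the batch average.

<syntaxhighlight lang="python">
import numpy as np

def masked_cross_entropy(probs, targets, mask, eps=1e-8):
    """Mean negative log-likelihood over non-padded timesteps only.

    probs   : (batch, time, vocab) softmax outputs of the decoder
    targets : (batch, time) gold token ids
    mask    : (batch, time) 1.0 on real tokens, 0.0 on padding
    eps     : floor on probabilities so log() can never return -inf
    """
    b, t = targets.shape
    # probability the model assigned to each gold token
    gold = probs[np.arange(b)[:, None], np.arange(t)[None, :], targets]
    # clip before log, then zero out padded steps via the mask
    nll = -np.log(np.clip(gold, eps, 1.0))
    return float((nll * mask).sum() / mask.sum())

# toy batch: 1 sentence, 3 steps (last one padded), vocab of 4
probs = np.full((1, 3, 4), 0.25)
targets = np.array([[1, 2, 0]])
mask = np.array([[1.0, 1.0, 0.0]])
print(masked_cross_entropy(probs, targets, mask))  # ~log(4) = 1.386
</syntaxhighlight>

Normalizing by mask.sum() rather than batch*time also keeps the loss comparable across batches with different amounts of padding.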
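For the planned memory-attention variants (relu, a(t-1), y(t-1)), the exact structure is not specified in the report. One plausible reading is an additive attention score that also conditions on the previous attention context a(t-1) and the previous target embedding y(t-1), with relu as an alternative to tanh. The sketch below is a hypothetical illustration under that assumption; the weight names (W_h, W_s, W_a, W_y, v) and the function attention_step are invented for the example.

<syntaxhighlight lang="python">
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def attention_step(h, s_prev, a_prev, y_prev, p, activation=np.tanh):
    """One step of additive attention with extra conditioning inputs.

    h      : (time, dim_h) encoder states
    s_prev : (dim_s,) previous decoder state
    a_prev : (dim_h,) previous attention context  -- the a(t-1) input
    y_prev : (dim_y,) previous target embedding   -- the y(t-1) input
    p      : dict with weights W_h, W_s, W_a, W_y and score vector v
    """
    pre = (h @ p["W_h"].T + s_prev @ p["W_s"].T
           + a_prev @ p["W_a"].T + y_prev @ p["W_y"].T)   # (time, dim_att)
    e = activation(pre) @ p["v"]                          # (time,) scores
    e = e - e.max()                                       # softmax stability
    alpha = np.exp(e) / np.exp(e).sum()                   # attention weights
    context = alpha @ h                                   # new context a(t)
    return alpha, context
</syntaxhighlight>

Passing activation=relu instead of np.tanh covers the 'relu' variant in the plan, and dropping the W_a or W_y terms recovers the plain additive (Bahdanau-style) score, which makes it easy to ablate the extra inputs one at a time.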