NLP Status Report 2017-7-3

From cslt Wiki
Latest revision as of 04:07, 3 July 2017

Date: 2017/7/3

Jiyuan Zhang

Aodong LI
Last Week:
  • Tried seq2seq models with and without attention on the style-transfer (cross-domain) task, but neither worked, due to overfitting:
      seq2seq with attention: Chinese-to-English
      vanilla seq2seq: English-to-English (unsupervised)
  • Read two papers on style-controlled generation in the generative-model field
  • Trained a seq2seq model with a style code
This Week:
  • Understand the models and mechanisms described in the two related papers
  • Figure out new ways to approach the style-transfer task
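For reference, the attention mechanism in the seq2seq experiments above can be illustrated with plain dot-product attention over encoder states. This is a minimal sketch of the general technique, not the exact model used in the experiments:

```python
import math

def dot_product_attention(query, keys, values):
    # Score each encoder state (key) against the decoder state (query),
    # normalize the scores with a softmax, and return the attention
    # weights together with the weighted sum of the values (the context).
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, context

# Toy example: the query aligns with the first encoder state,
# so the first weight dominates.
w, c = dot_product_attention([1.0, 0.0],
                             [[1.0, 0.0], [0.0, 1.0]],
                             [[1.0, 0.0], [0.0, 1.0]])
```

At each decoder step the context vector is concatenated with the decoder state before predicting the next token; without attention, the vanilla seq2seq model must compress the whole source into a single fixed vector, which is one reason the unsupervised English-to-English setup is prone to overfitting on small data.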
Shiyue Zhang
Shipan Ren
Last Week:
  • Read and ran the ViVi_NMT code
  • Read the TensorFlow API documentation
  • Debugged ViVi_NMT and upgraded the code to TensorFlow 1.0
  • Found that the new version saves time and has lower complexity and better BLEU than before
This Week:
  • Test both code versions on small and large Chinese-English datasets
  • Test both code versions on the WMT 2014 English-German and English-French datasets
  • Record the experimental results
  • Read papers and try to improve BLEU further
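Since BLEU is the evaluation metric in the items above, a simplified sentence-level BLEU is sketched below: clipped n-gram precisions with add-one smoothing, combined by a geometric mean and multiplied by a brevity penalty. Real evaluations use corpus-level BLEU (e.g. the Moses multi-bleu.perl script); this is only an illustration of the metric:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    # Clipped n-gram precision: each candidate n-gram is credited at most
    # as many times as it appears in the reference.
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((clipped + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * geo_mean
```

A perfect match scores 1.0, and any missing or shortened output is penalized, which is why both translation quality and output length matter when trying to raise BLEU.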