Difference between revisions of "NLP Status Report 2017-1-10"

|Jiyuan Zhang ||
*improved speed of the prediction process
*designed the questionnaire, then discovered a problem and redesigned it
*ran experiments:<br/>
two styles of experiments with top1_memory_model<br/>
overfitting experiments with top1_memory_model<br/>
two styles of experiments with average_memory_model<br/>
overfitting experiments with average_memory_model
*improve the poem model
*complete the questionnaire
|Andi Zhang ||

Revision as of 05:50, 10 January 2017

Date People Last Week This Week
2017/1/3 Yang Feng
  • nmt+mn: tried to improve the nmt baseline;
  • ran into problems with the baseline; ruled out output order and file format as factors and traced the cause to the learning rate;
  • read Andi's code;
  • wrote the code for bleu evaluation;
  • managed to fix the code of nmt+mn;
  • ran experiments [report]
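The BLEU evaluation code mentioned above is not shown here; as a reference point, a minimal sentence-level BLEU can be sketched in pure Python (an assumption about the setup — production evaluations typically use corpus-level, multi-reference scripts such as multi-bleu.perl, and this floor smoothing is only one of several variants):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: modified n-gram precision for n=1..max_n,
    geometric mean, brevity penalty.  Zero counts are floor-smoothed."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # clip each hypothesis n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty: punish hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * geo_mean
```

An identical hypothesis scores 1.0; any n-gram mismatch or length difference lowers the score.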
Jiyuan Zhang
  • designed the questionnaire, then discovered a problem and redesigned it
  • complete the questionnaire
Andi Zhang
  • helped Jiyuan deal with the poem questionnaires
  • continue this work; may start collecting feedback once all questionnaires are filled out
Shiyue Zhang
  • finished running theano baseline
  • read and understood the beam search in theano baseline
  • started to write the Dynet Chinese Document
  • started to implement beam search to tensorflow baseline
  • finished the beam search implementation
  • go on with the Dynet Chinese Document
  • run nmt with monolingual data
  • BLEU computation
  • learn about tensorflow
  • improve my paper
  • analyze experiment results
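The beam search implemented above can be illustrated with a framework-agnostic sketch (the `step_fn` interface is hypothetical; the real theano/tensorflow implementations batch scoring over the whole beam on the GPU):

```python
def beam_search(step_fn, start_token, end_token, beam_size=4, max_len=20):
    """Beam search over a next-token scorer.

    step_fn(prefix) must return a list of (token, log_prob) pairs for the
    next position.  Keeps the beam_size best partial hypotheses at each
    step and returns the highest-scoring completed sequence.
    """
    beams = [([start_token], 0.0)]   # (sequence, cumulative log-prob)
    complete = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_fn(seq):
                if tok == end_token:
                    complete.append((seq + [tok], score + logp))
                else:
                    candidates.append((seq + [tok], score + logp))
        if not candidates:
            break
        # prune to the top-k partial hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    if not complete:   # nothing emitted end_token within max_len
        complete = beams
    return max(complete, key=lambda c: c[1])[0]

# toy scorer: prefers "x", allows ending only after three tokens
def step_fn(seq):
    if len(seq) < 3:
        return [("x", -0.5), ("y", -0.9)]
    return [("</s>", 0.0)]
```

With `beam_size=1` this reduces to greedy decoding; larger beams keep alternative prefixes alive in case a locally worse token leads to a globally better sequence.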
Peilun Xiao
  • learned the tf-idf algorithm
  • implemented the tf-idf algorithm in Python, but found it did not work well
  • tried using a small dataset to test the program
  • use sklearn's tf-idf to test the dataset
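The tf-idf computation above can be sketched as follows (a plain tf * log(N/df) variant; note that sklearn's TfidfVectorizer uses a smoothed idf and l2 normalization by default, which is one possible reason a hand-rolled version produces different numbers):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents.

    tf  = term count / document length
    idf = log(N / df)   (plain variant, no smoothing)
    """
    n_docs = len(docs)
    df = Counter()                 # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        weights.append({term: (c / len(doc)) * math.log(n_docs / df[term])
                        for term, c in counts.items()})
    return weights
```

A term that appears in every document gets idf = log(1) = 0, so it is weighted out entirely; on very small test corpora this can zero most terms, which is worth checking when results look wrong.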