NLP Status Report 2016-11-21

From cslt Wiki
Revision as of 00:49, 21 November 2016

Date People Last Week This Week
2016/11/21 Yang Feng
Jiyuan Zhang
Andi Zhang
  • prepare a new data set for paraphrase, removing repetitions and most of the noise
  • run NMT on the fr-en data set and the new paraphrase set
  • read through the source code to find ways to modify it
  • helped Guli with running NMT on our server
  • decide whether to drop Theano
  • start to work on the code
Shiyue Zhang
  • run RNNG with MKL successfully, which can double or triple the speed
  • rerun the original model and get the final result, 92.32
  • rerun the wrong-memory models; still running
  • implement the dynamic memory model and get 92.54, which is 0.22 better than the baseline
  • try another memory structure
  • try more models and summarize the results
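The MKL speedup above depends on how the underlying toolkit is built. As a hedged sketch only: RNNG implementations are commonly built on DyNet, whose CMake build can link against Intel MKL; the paths below are placeholders, not the actual configuration used here.

```shell
# Sketch of a DyNet build linked against MKL (paths are assumptions).
mkdir build && cd build
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen \
         -DMKL=TRUE -DMKL_ROOT=/opt/intel/mkl
make -j4

# At run time, MKL's thread count can be capped via an environment variable.
export MKL_NUM_THREADS=4
```

Whether this matches the setup in the report is not confirmed by the source; it only illustrates the kind of build change that yields the reported 2-3x speedup.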
Guli