Difference between revisions of "NLP Status Report 2016-11-28"

Revision as of 07:21, 29 November 2016

Date People Last Week This Week
2016/11/28 Yang Feng
  Last Week:
  • rnng+MN: got the result of the k-means method; the result was slightly worse. Fixed the bug, analyzed the memory units, changed the similarity calculation, and reran.
  • sequence-to-sequence+MN: read the code and discussed the implementation details with Andy; checked the Wikianswers data and found that the answers are usually much longer than the questions; read QA-related papers in the ACL and EMNLP proceedings but haven't found a proper dataset yet.
  • Huilan's work: got a version with a better result, focusing on syntactic transformation.
  This Week:
  • rnng+MN: get the result with the new similarity calculation.
  • revise the TensorFlow code to make it equivalent to the Theano version.
  • review Jiyuan's code.
  • Huilan's work: continue the work of adding syntactic information.
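The report does not spell out the revised similarity calculation over the memory units; a common choice in memory-network work is cosine similarity between the query and each memory slot, followed by a softmax-weighted read. A minimal numpy sketch of that scheme (function names are illustrative, not taken from the project code):

```python
import numpy as np

def cosine_similarity(query, memory):
    """Cosine similarity between a query vector (d,) and each
    row of a memory matrix (n, d); returns (n,) scores."""
    q = query / (np.linalg.norm(query) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    return m @ q

def attention_read(query, memory):
    """Softmax over the similarity scores, then a weighted
    sum of the memory slots (a soft memory read)."""
    scores = cosine_similarity(query, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory
```

The softmax read keeps the whole pipeline differentiable, which is what makes it trainable end-to-end alongside the rnng model.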
Jiyuan Zhang
  Last Week:
  • polished TRP [1]
  This Week:
  • improve the poem model
Andi Zhang
  Last Week:
  • dealt with the zh2en data set and ran it on the NTM
  • had a small breakthrough with the code
  This Week:
  • get the output of the encoder to form the memory
  • continue the coding work on seq2seq with MemN2N
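The idea of taking the encoder's outputs to form the memory can be sketched with a toy RNN encoder whose hidden states are stacked into a memory matrix and then read MemN2N-style with dot-product attention. This is a minimal numpy illustration under assumed shapes, not the project's NTM/Theano code:

```python
import numpy as np

def encode(embeddings, Wx, Wh):
    """Toy RNN encoder: consumes one embedding per source token
    and returns all hidden states stacked as a memory matrix."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in embeddings:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.stack(states)        # (src_len, hidden) memory

def memory_read(query, memory):
    """MemN2N-style read: dot-product attention over the memory
    slots, returning a context vector for one decoder step."""
    scores = memory @ query
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ memory
```

In the real seq2seq model the decoder would issue one such read per output step, conditioning generation on the retrieved context.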
Shiyue Zhang
  Last Week:
  • found a bug in my code and fixed it.
  • tried memory with a gate and found a big problem with the memory.
  • reran the previous models; the results are not better than the baseline. [report]
  • reran the original model with the same seed and got exactly the same result.
  • published a TRP [2]
  This Week:
  • try to solve the memory problem
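Getting exactly the same result on a rerun, as in the seed experiment above, depends on fixing every source of randomness before building the model. A minimal Python sketch, assuming numpy (Theano's RandomStreams and TensorFlow's tf.set_random_seed of that era would be seeded analogously):

```python
import random
import numpy as np

def set_seed(seed):
    """Fix the Python and numpy RNGs so reruns are bit-identical.
    Framework RNGs (Theano RandomStreams, tf.set_random_seed)
    must be seeded separately before graph construction."""
    random.seed(seed)
    np.random.seed(seed)

# Two runs with the same seed produce identical "random" weights.
set_seed(1234)
a = np.random.rand(3)
set_seed(1234)
b = np.random.rand(3)
assert np.array_equal(a, b)
```

A fixed-seed rerun that reproduces the baseline exactly is a useful sanity check that a result change comes from the model change, not from initialization noise.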
Guli
  Last Week:
  • was busy but made no progress for the first two days of the week.
  • modified the code and ran NMT on the fr-en data set
  This Week:
  • modify the code and run NMT on the ch-uy data set
  • write a survey on Chinese-Uyghur MT