Ling Luo 2015-08-31

Works in the past:

1. Finished training word embeddings via 5 models (see the training sketch after this list):

Using the EnWiki dataset (953M):

CBOW, Skip-Gram

Using the text8 dataset (95.3M):

CBOW, Skip-Gram, C&W, GloVe, LBL, and Order (count-based)
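
As a rough illustration of how the CBOW and Skip-Gram embeddings could be trained on the text8 corpus, here is a minimal sketch using gensim's word2vec implementation. The file location, output paths, and hyperparameters (window size, minimum count, the set of dimensions) are assumptions, not the exact settings used in these experiments; the C&W, GloVe, LBL, and Order models are trained with their own toolkits and are not shown.

```python
# Minimal sketch: train CBOW and Skip-Gram embeddings on text8 with gensim.
# Paths and hyperparameters are illustrative assumptions, not the exact
# settings used in these experiments (gensim 4.x parameter names).
from gensim.models import Word2Vec
from gensim.models.word2vec import Text8Corpus

sentences = Text8Corpus("text8")           # assumed location of the text8 file

for dim in (10, 50, 100, 200):             # a few of the tested dimensions
    for name, sg in (("cbow", 0), ("skipgram", 1)):
        model = Word2Vec(sentences,
                         vector_size=dim,  # embedding dimension
                         sg=sg,            # 0 = CBOW, 1 = Skip-Gram
                         window=5,
                         min_count=5,
                         workers=4)
        # Save the word vectors in the plain word2vec text format
        model.wv.save_word2vec_format(f"text8_{name}_{dim}d.txt")
```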

2. Used several tasks to measure the quality of the word vectors at various dimensions (10–200); an evaluation sketch for the word-similarity task follows this list:

word similarity (WS)

the TOEFL set: a small dataset

analogy task: 9K semantic and 10.5K syntactic analogy questions

text classification: the IMDB dataset (positive/negative); the unlabeled portion is used to train word embeddings

sentence-level sentiment classification (based on convolutional neural networks)

part-of-speech tagging
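
A minimal sketch of how the word-similarity (WS) evaluation could be scored: for each word pair, the cosine similarity of the two embeddings is computed and compared against the human ratings with Spearman correlation. The embedding file text8_cbow_100d.txt and the tab-separated wordsim353.tab file (word1, word2, human rating per line) are assumptions about file names and format, not the exact files used here.

```python
# Sketch of the word-similarity (WS) evaluation: Spearman correlation between
# cosine similarities of the embeddings and human similarity ratings.
# The file names and the 3-column tab-separated format are assumptions.
import numpy as np
from scipy.stats import spearmanr
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("text8_cbow_100d.txt")

model_scores, human_scores = [], []
with open("wordsim353.tab", encoding="utf-8") as f:
    for line in f:
        w1, w2, rating = line.strip().split("\t")
        if w1 in vectors and w2 in vectors:      # skip out-of-vocabulary pairs
            v1, v2 = vectors[w1], vectors[w2]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cos)
            human_scores.append(float(rating))

rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation on the WS set: {rho:.3f}")
```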

Works in this week:

word similarity (WS): try different similarity calculation methods (see the sketch after this list)

named entity recognition (NER)

focus on CNN-based models
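
Regarding the plan to try different similarity calculation methods on the WS task, the sketch below compares a few common alternatives to plain cosine similarity. Which measures are actually intended is not stated, so the choice of negative Euclidean distance and Pearson correlation here is an assumption; the random vectors only stand in for real word embeddings.

```python
# Sketch of alternative similarity measures between two word vectors,
# for comparison against plain cosine similarity on the WS task.
# The chosen measures are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    """Standard cosine similarity."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def neg_euclidean(u, v):
    """Negative Euclidean distance: larger means more similar."""
    return -np.linalg.norm(u - v)

def vector_correlation(u, v):
    """Pearson correlation between the components of the two vectors."""
    r, _ = pearsonr(u, v)
    return r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u, v = rng.normal(size=100), rng.normal(size=100)  # stand-ins for word vectors
    for fn in (cosine, neg_euclidean, vector_correlation):
        print(fn.__name__, fn(u, v))
```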