150308-Lantian Li

From cslt Wiki

Latest revision as of 14:13, 9 March 2015

Weekly Summary

1. Ran a series of d-vector-based experiments (testing on sentences 2 and 7).

1). Comparison experiments on input data: one text / two texts / 15 texts.

2). Comparison experiments on different hidden layers: the last hidden layer with and without sigmoid normalization.

The experimental results (compared by EER, %):

1). two texts < 15 texts < one text (especially under the LDA condition); the d-vector can be used in pseudo speaker recognition.

2). last hidden layer without sigmoid normalization < last hidden layer with sigmoid normalization (under the LDA condition, regardless of input data).
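Since the comparisons above are ranked by EER, a minimal sketch of how EER can be computed from trial scores may be useful. This is a generic NumPy implementation, not the exact scoring script used in these experiments; the score arrays are hypothetical inputs.

```python
import numpy as np

def eer(target_scores, impostor_scores):
    """Equal Error Rate: the operating point where the false-acceptance
    rate (FAR) equals the false-rejection rate (FRR).
    target_scores / impostor_scores are hypothetical 1-D score arrays."""
    # Candidate decision thresholds: every observed score.
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    # FAR: fraction of impostor trials accepted at each threshold.
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    # FRR: fraction of target trials rejected at each threshold.
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    # EER is where the two curves cross (approximated on the score grid).
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2
```

A lower EER means better separation, which is why "two texts < 15 texts < one text" means the two-text condition performed best.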

2. Trained text-content-based neural networks and extracted d-vectors from these networks.
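The d-vector extraction step above can be sketched as follows: frame-level features are forwarded through the trained network, the last hidden layer's activations are taken (with or without the sigmoid normalization compared in the results), and the activations are averaged over frames into one utterance-level d-vector. The layer parameters here are hypothetical placeholders, not the trained weights from these experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_dvector(frames, weights, biases, normalize=True):
    """Average the last hidden layer's activations into a d-vector.
    frames:  (num_frames, feat_dim) frame-level features
    weights/biases: hypothetical lists of trained layer parameters
    normalize=True applies sigmoid to the last hidden layer;
    False keeps the raw pre-activation (the two settings compared above)."""
    h = frames
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)            # earlier hidden layers
    h = h @ weights[-1] + biases[-1]      # last hidden layer, pre-activation
    if normalize:
        h = sigmoid(h)                    # sigmoid-normalized variant
    return h.mean(axis=0)                 # frame average -> d-vector
```

In practice the output layer (speaker softmax) used during training is discarded at extraction time; only the hidden-layer activations are kept.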

Next Week

1. Continue with task 1 and task 2.