150308-Lantian Li

Weekly Summary

1. Conducted a series of d-vector-based experiments (tested on sentences 2 and 7).

1). Comparison experiments on the input data: one text / two texts / 15 texts.

2). Comparison experiments on the last hidden layer: with vs. without sigmoid normalization (a sketch of both variants follows the results below).

The experimental results, compared by EER (%), are:

1). EER: two texts < 15 texts < one text (especially under the LDA condition), i.e., the two-text input performs best; the d-vector can be used in pseudo speaker recognition.

2). EER: last-hid-layer without sigmoid normalization < with sigmoid normalization (under the LDA condition, regardless of the input data), i.e., omitting the sigmoid performs better.
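
As a reference for the second comparison, here is a minimal sketch of the scoring pipeline assumed above: the utterance-level d-vector is the average of the last-hidden-layer outputs, taken either after the sigmoid ("with sigmoid normalization") or on the raw linear outputs ("without"), trials are scored by cosine similarity, and the EER is found by sweeping thresholds. All function names, dimensions, and the random data are illustrative and not the actual experiment code; in the LDA condition the d-vectors would additionally be projected by an LDA transform estimated on training speakers before scoring.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_dvector(last_hidden_outputs, use_sigmoid=True):
    # last_hidden_outputs: (n_frames, hidden_dim) linear outputs of the
    # last hidden layer for one utterance.  With use_sigmoid=True the
    # sigmoid non-linearity is applied before averaging ("with sigmoid
    # normalization"); otherwise the raw linear outputs are averaged.
    acts = sigmoid(last_hidden_outputs) if use_sigmoid else last_hidden_outputs
    dvec = acts.mean(axis=0)
    return dvec / np.linalg.norm(dvec)           # length-normalize

def cosine_score(enroll_dvec, test_dvec):
    # Cosine similarity between enrollment and test d-vectors.
    return float(np.dot(enroll_dvec, test_dvec) /
                 (np.linalg.norm(enroll_dvec) * np.linalg.norm(test_dvec)))

def compute_eer(target_scores, nontarget_scores):
    # Approximate equal error rate (%) by sweeping score thresholds.
    target_scores = np.asarray(target_scores, dtype=float)
    nontarget_scores = np.asarray(nontarget_scores, dtype=float)
    thresholds = np.unique(np.concatenate([target_scores, nontarget_scores]))
    eer, gap = 100.0, np.inf
    for thr in thresholds:
        frr = np.mean(target_scores < thr)       # false rejection rate
        far = np.mean(nontarget_scores >= thr)   # false acceptance rate
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), 100.0 * (far + frr) / 2.0
    return eer

# Illustrative trial with random activations (200 / 180 frames, 400-dim layer).
rng = np.random.default_rng(0)
enroll_acts = rng.normal(size=(200, 400))
test_acts = rng.normal(size=(180, 400))
for use_sigmoid_flag in (True, False):
    e = extract_dvector(enroll_acts, use_sigmoid_flag)
    t = extract_dvector(test_acts, use_sigmoid_flag)
    print(use_sigmoid_flag, cosine_score(e, t))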

2. Train a text-content-based neural network and extract d-vectors from this network.
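
For task 2, a minimal sketch (in PyTorch) of what a text-content-based network with a reusable d-vector layer could look like; the feature dimension, hidden sizes, number of content classes, and layer count are placeholders, not the configuration actually used in the experiments.

import torch
import torch.nn as nn

class ContentDNN(nn.Module):
    # Frame-level DNN trained to classify text content (placeholder labels,
    # e.g. phone or text IDs); its last hidden layer doubles as the
    # d-vector extractor.
    def __init__(self, feat_dim=40, hidden_dim=400, n_classes=100):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim),        # last hidden layer (linear part)
        )
        self.last_sigmoid = nn.Sigmoid()
        self.output = nn.Linear(hidden_dim, n_classes)

    def forward(self, frames):
        # frames: (n_frames, feat_dim) acoustic features.
        return self.output(self.last_sigmoid(self.hidden(frames)))

    def dvector(self, frames, use_sigmoid=True):
        # Average the last-hidden-layer outputs over the utterance,
        # with or without the sigmoid, as in experiment 2) above.
        with torch.no_grad():
            h = self.hidden(frames)
            if use_sigmoid:
                h = self.last_sigmoid(h)
            return h.mean(dim=0)

# Example: extract a d-vector for one utterance of random 40-dim features.
model = ContentDNN()
utt = torch.randn(200, 40)
dvec = model.dvector(utt, use_sigmoid=False)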

Next Week

1. Continue with task 1 and task 2.