Lantian Li 14-12-01


Weekly Summary

1. Compared the performance of SVM and MLR; the result is that MLR performs worse than SVM.

I think there are two reasons: 1) the training dataset is small; 2) this GMM-UBM based task is not well suited to a complex non-linear model. A minimal comparison sketch follows below.
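The following is a minimal sketch of such an SVM vs. MLR (multiclass logistic regression) comparison. The synthetic dataset, feature dimension, and classifier settings are assumptions for illustration only, not the actual experimental setup.

```python
# Hedged sketch: comparing an SVM and multiclass logistic regression (MLR)
# on a small synthetic dataset (not the real speaker data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Small synthetic dataset, mimicking a small training set with few classes.
X, y = make_classification(n_samples=200, n_features=60, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

svm = SVC(kernel='linear', C=1.0).fit(X_train, y_train)
mlr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("MLR accuracy:", mlr.score(X_test, y_test))
```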

2. Computed the training accuracy. For true speakers the training accuracy is about 4%, and for impostor speakers it is about 1%.

The EER is 2%, so there is a clear gap between the true-speaker and impostor training accuracies (see the EER sketch below).

I am still not sure whether the training dataset needs to be adjusted.
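Below is a minimal sketch of how an EER can be computed from true-speaker and impostor scores. The score distributions here are synthetic placeholders, not the actual system scores.

```python
# Hedged sketch: equal error rate (EER) from target and impostor scores.
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic scores: higher score means "more likely the true speaker".
rng = np.random.default_rng(0)
target_scores = rng.normal(loc=2.0, scale=1.0, size=500)
impostor_scores = rng.normal(loc=0.0, scale=1.0, size=5000)

scores = np.concatenate([target_scores, impostor_scores])
labels = np.concatenate([np.ones_like(target_scores),
                         np.zeros_like(impostor_scores)])

# The ROC gives the false-alarm rate (fpr) and the miss rate (1 - tpr);
# the EER is where the two curves cross.
fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1.0 - tpr
idx = np.nanargmin(np.abs(fnr - fpr))
eer = (fpr[idx] + fnr[idx]) / 2.0
print("EER: %.2f%%" % (eer * 100))
```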

3. Helped Jun Wang test the performance of the PLDA-based classifier; the result is baseline < SVM < DNN.

So I am learning the DNN method from him.

Next Week

1. Continue to look for distinguishing characteristics:

1) Improve the K-means algorithm.

2) Implement the UBM segmentation score method.

3) Add the original GMM score to the feature vector (a minimal sketch follows below).
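For plan item 3), the sketch below shows one way to append an utterance's GMM log-likelihood score to an existing feature vector. The feature dimensions, GMM size, and data are illustrative assumptions, not the actual UBM or features.

```python
# Hedged sketch: appending a GMM log-likelihood score to a feature vector.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
frames = rng.normal(size=(300, 13))    # acoustic frames of one utterance (assumed)
utt_vector = rng.normal(size=(64,))    # existing utterance-level feature vector (assumed)

# A UBM-like GMM would be trained elsewhere; here it is fit on the same
# frames just to have a model to score against.
gmm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0)
gmm.fit(frames)

# Average per-frame log-likelihood under the GMM, used as one extra feature.
gmm_score = gmm.score(frames)
augmented = np.hstack([utt_vector, [gmm_score]])
print(augmented.shape)   # (65,)
```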