- AM, lexicon, and LM are shared.
- LM count files are still being transferred.
400 hour DNN training
Test Set | Tencent Baseline | bMMI | fMMI | BN (with fMMI) | Hybrid
- The Tencent baseline uses 700h of online data plus 700h of 863 data, HLDA+MPE, and an 88k lexicon.
- Our results use a 400-hour AM and an 88k LM, trained with ML+bMMI (the bMMI objective is given below for reference).
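For reference (added context, not from the original notes): bMMI training maximizes the boosted MMI objective of Povey et al. (2008), in which each competing hypothesis s is boosted in the denominator according to its error against the reference s_u; b is the boosting factor, kappa the acoustic scale, and A(s, s_u) a phone- or state-level accuracy:

 \mathcal{F}_{\mathrm{bMMI}}(\lambda) = \sum_{u} \log \frac{p_{\lambda}(O_u \mid s_u)^{\kappa}\, P(s_u)}{\sum_{s} p_{\lambda}(O_u \mid s)^{\kappa}\, P(s)\, e^{-b\, A(s,\, s_u)}}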
Tencent test results
- AM: 70h of training data (2 days, 15 machines, 10 threads)
- LM: 88k LM
- Test case: general
- gmm-bmmi: 38.7%
- dnn-1: 28% (11-frame window, phone-based tree; frame splicing is sketched below)
- dnn-2: 34% (9-frame window, state-based tree)
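The frame-window numbers above refer to splicing neighboring feature frames into one DNN input vector. A minimal sketch of that step (illustrative only; function and variable names are ours, not from the actual system):

 import numpy as np
 
 def splice_frames(feats, left=5, right=5):
     # Stack each frame with its neighbours: left=right=5 gives the
     # 11-frame window of dnn-1. Edge frames are padded by repeating
     # the first/last frame, a common convention.
     T, D = feats.shape
     padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                              feats,
                              np.repeat(feats[-1:], right, axis=0)])
     window = left + 1 + right
     return np.stack([padded[t:t + window].reshape(-1) for t in range(T)])
 
 feats = np.random.randn(100, 40)      # 100 frames of 40-dim features
 nn_input = splice_frames(feats)       # shape (100, 440)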
GPU & CPU merge
- Investigate the possibility of merging the GPU and CPU code and try to find a simpler approach (1 week); one common dispatch pattern is sketched below.
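One common way to keep a single code path is to write the math once against an array module bound to either backend. This is an illustrative Python analogue of the pattern only (the real code base is C++/CUDA, and cupy here is just a stand-in for a GPU array library):

 import numpy
 try:
     import cupy          # GPU backend, if present (illustration only)
     xp = cupy
 except ImportError:
     xp = numpy           # CPU fallback
 
 def affine_sigmoid(x, W, b):
     # One DNN layer written once; the same line runs on CPU or GPU
     # depending on which module xp is bound to.
     return 1.0 / (1.0 + xp.exp(-(x.dot(W) + b)))
 
 x = xp.ones((8, 4)); W = xp.full((4, 3), 0.1); b = xp.zeros(3)
 y = affine_sigmoid(x, W, b)   # shape (8, 3)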
L1 sparse initial training
- Started investigating (a standard L1 soft-thresholding step is sketched below).
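For context, one standard way to obtain L1 sparsity during training (a sketch of the general technique, not necessarily the method that will be used here) is an SGD step followed by soft-thresholding, the proximal operator of the L1 penalty:

 import numpy as np
 
 def sgd_l1_step(W, grad, lr, l1):
     # Gradient step, then soft-thresholding: weights within lr*l1
     # of zero are set exactly to zero, which yields a sparse matrix.
     W = W - lr * grad
     return np.sign(W) * np.maximum(np.abs(W) - lr * l1, 0.0)
 
 W = np.random.randn(256, 256)
 W = sgd_l1_step(W, grad=np.zeros_like(W), lr=0.1, l1=1.0)
 print("zero fraction:", (W == 0.0).mean())   # about 8% with these settings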
- HTK2Kaldi: the tool shipped with Kaldi does not work.
- Kaldi2HTK: implementation done; testing still to do.
- Large speed degradation on the embedded platform (roughly 1/60 of the original speed).
- Planning for sparse DNN.
- QA LM training still fails; Mengyuan needs to put more work into this.