- LM count files still undelivered!
- sparse DNN: sticky training (retrain the nnet while keeping the sparsity)
Zero out small-magnitude weights (test set: 1900):
|without sticky: WER||7.55||7.60||7.62||7.66||7.72||7.87||9.46||53.23|
|with sticky: WER||7.55||7.57||7.60||7.60||7.63||7.64||8.35||9.51|
The conclusion is that with L2 retraining, most of the DNN performance is recovered. The extremely sparse case (th0.3) with sticky training looks quite promising, which suggests the network can be made very sparse. However, this is only on the 1900 test set; it needs verification on other sets.
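The pruning-plus-sticky-retraining idea above can be sketched as follows. This is a minimal illustration, not the actual training code: the layer sizes, learning rate, and the 0.3 threshold (named "th0.3" above) are used here only as examples, and plain SGD stands in for whatever optimizer the real recipe uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix standing in for one DNN layer (hypothetical sizes).
W = rng.normal(size=(4, 8)).astype(np.float32)

# Step 1: prune -- zero out small-magnitude weights (threshold illustrative).
threshold = 0.3
mask = np.abs(W) >= threshold
W_sparse = W * mask

# Step 2: "sticky" retraining -- after every gradient update, reapply the
# mask so pruned weights stay at zero and the sparsity pattern is kept.
def sticky_update(W, grad, mask, lr=0.01):
    W = W - lr * grad          # ordinary SGD step
    return W * mask            # re-zero the pruned positions

grad = rng.normal(size=W.shape).astype(np.float32)
W_new = sticky_update(W_sparse, grad, mask)

# The sparsity pattern is unchanged after the update.
assert np.all(W_new[~mask] == 0.0)
```

The key point is that the mask is fixed once at pruning time and reapplied after every update, so retraining can recover accuracy without destroying the sparsity.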
- fixed-point DNN forwarding
Building on the fixed-point FST and NN, and the sparse-NN results above, we are working on a fast NN decoder suitable for embedded devices. This work has just started.
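A fixed-point forward pass can be sketched roughly as below. This is an assumed scheme for illustration only (symmetric per-tensor int8 quantization with int32 accumulation); the actual decoder may use a different bit width or scaling strategy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Float weights and input for one toy layer (illustrative sizes).
W = rng.normal(size=(3, 5)).astype(np.float32)
x = rng.normal(size=5).astype(np.float32)

# Symmetric linear quantization of weights and input to int8.
w_scale = np.abs(W).max() / 127.0
x_scale = np.abs(x).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Integer matrix-vector product (accumulate in int32 to avoid overflow),
# then rescale back to float once at the end.
y_int = W_q.astype(np.int32) @ x_q.astype(np.int32)
y = y_int * (w_scale * x_scale)

# The fixed-point result stays close to the float reference.
y_ref = W @ x
```

On an embedded target the int8 matmul would map to SIMD integer instructions, which is where the speedup over float forwarding comes from.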
GPU & CPU merge
- in progress.
- HTK2Kaldi: on hold.
- Kaldi2HTK: on hold; second priority.
The above work is probably unnecessary, since Tencent will fully migrate to the hybrid DNN approach, so HTK will no longer be used.
- check the references and change the compile options
- the large-scale AM training based on the Tencent 400h data is done.
- the random output problem is fixed.
|Test Set||#utt||PS default WER% (RT)||Tencent WER% (RT)|
|cw||993||8.01 (0.07)||7.61 (0.40)|
|hfc||986||6.69 (0.07)||5.48 (0.40)|
|zz||984||12.73 (0.07)||5.91 (0.40)|
- To be done
- large scale parallel training.
- NN-based engine (dynamic and static).