Parallel Processing of HTK Commands

      Written by Bowon Lee
      Last updated: March 08, 2006


  • Introduction
      Training and testing speech models for automatic speech recognition requires vast amounts of data as well as time for training the models. When multiple processors are available, executing the training algorithms in parallel can significantly reduce this time.

      Some of the HTK commands can be executed in parallel. For example, when there are thousands of utterances, we can run 'HCopy' and 'HVite' in parallel because they process each utterance individually, and the outcome for one utterance does not affect any other utterance in the same process. In this case, parallel processing can be achieved simply by dividing the list of utterances among the available processors.

      This page presents some examples of running HTK commands in parallel without any modification of the original commands. This is achieved by using a scripting language and a job queuing system on a Linux cluster.

  • Basic Procedure
      Parallel processing of HTK commands can be done in the following steps:

      1. Divide the list of data files
      2. Send the divided jobs to the processors
      3. Check for job completion (optional)
      4. Combine the results (optional)

      Dividing the list of data files (usually called 'script' files in HTK) can be achieved with any scripting language. Any job queuing system can be used for sending jobs and for checking job completion; job completion can also be checked with a scripting language by examining the outputs of the HTK commands. Combining the results, again with any scripting language, is done after the job completion check. Depending on the command, checking job completion or combining results may not be necessary.
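      As an illustration, here is a minimal Perl sketch of steps 1 and 2. The script file name 'train.scp', the processor count, and the 'HCopy' options are placeholders chosen for this sketch, not part of the original scripts; 'qsub' is the SGE submission command.

        #!/usr/bin/perl
        # Minimal sketch of steps 1 and 2: split an HTK script file into
        # pieces and submit one job per piece (all names are placeholders).
        use strict;
        use warnings;

        my $nproc = 4;                            # number of processors
        open(my $scp, '<', 'train.scp') or die "cannot open train.scp: $!";
        my @lines = <$scp>;
        close($scp);

        # Step 1: divide the list of data files into $nproc pieces.
        for my $i (0 .. $nproc - 1) {
            open(my $out, '>', "train.$i.scp") or die "cannot write: $!";
            print $out @lines[grep { $_ % $nproc == $i } 0 .. $#lines];
            close($out);
        }

        # Step 2: send one job per piece to the queue ('-b y' tells SGE
        # that the submitted command is a binary, not a shell script).
        for my $i (0 .. $nproc - 1) {
            system('qsub', '-b', 'y', 'HCopy', '-C', 'config',
                   '-S', "train.$i.scp") == 0 or die "qsub failed: $!";
        }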

  • Examples
    1. HCopy.pl

      The Perl script 'HCopy.pl' executes the HTK command 'HCopy' by dividing the list of data files among a specified number of processors and sending the divided jobs to each of them. This script uses steps 1 through 3 described above. It is designed to work transparently, i.e., users can specify any options as if they were running the 'HCopy' command itself. The only difference visible to the user is that it submits jobs to the processors instead of running them directly, as sketched below.
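      The distributed script is not reproduced here, but the central pass-through idea can be sketched as follows; the piece names and processor count are hypothetical and correspond to the split sketch above.

        # Hypothetical sketch of the pass-through idea (not the actual
        # 'HCopy.pl'): find the -S option among the user's arguments,
        # substitute one piece of the script file per job, and forward
        # every other option to 'HCopy' unchanged.
        use strict;
        use warnings;

        my @args = @ARGV;            # e.g. -C config -S train.scp
        my ($s) = grep { $args[$_] eq '-S' } 0 .. $#args;
        die "no -S script option given\n" unless defined $s;

        my $nproc = 4;
        for my $i (0 .. $nproc - 1) {
            my @job = @args;
            $job[$s + 1] = "train.$i.scp";   # piece from step 1
            system('qsub', '-b', 'y', 'HCopy', @job) == 0
                or die "qsub failed for piece $i: $!";
        }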

    2. HVite.pl

      The Perl script 'HVite.pl' executes the HTK command 'HVite' in the same manner as 'HCopy.pl' does. This script uses steps 1 through 4 for parallel execution because the results from the individual processors must be combined, as sketched below. When there are only a few data files and the user still wants to run the jobs in parallel, he or she can first submit a series of jobs and then check for job completion. Please note that when the number of data files is below a certain threshold (32 per processor in this script), the script just submits one job and does not check for job completion; in that case, checking for job completion should be implemented in the main routine that calls 'HVite.pl'.
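      As an illustration of steps 3 and 4 for this case, the following sketch waits for the per-job output MLFs and merges them. The file names are placeholders, and the completion check is deliberately crude; a real script could query the queue with 'qstat' instead of polling for files.

        # Hypothetical sketch of steps 3 and 4: wait for the per-job
        # recognition MLFs, then merge them. Each piece begins with the
        # '#!MLF!#' header, which must appear only once in the result.
        use strict;
        use warnings;

        my $nproc  = 4;
        my @pieces = map { "rec.$_.mlf" } 0 .. $nproc - 1;

        # Step 3: poll until every output file exists.
        sleep 10 while grep { not -e $_ } @pieces;

        # Step 4: combine the results into a single MLF.
        open(my $out, '>', 'rec.mlf') or die "cannot write rec.mlf: $!";
        print $out "#!MLF!#\n";
        for my $piece (@pieces) {
            open(my $in, '<', $piece) or die "cannot open $piece: $!";
            while (my $line = <$in>) {
                print $out $line unless $line =~ /^#!MLF!#/;
            }
            close($in);
        }
        close($out);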

    3. HERest.pl

      The Perl script 'HERest.pl' executes the HTK command 'HERest' in a similar way to 'HVite.pl'. The difference is that the statistics from each result must be combined so that the final statistics represent the entire set of data files, whereas the results in 'HVite.pl' can simply be concatenated to produce the final result. Therefore, the result is not exactly the same as the result obtained by running the job on a single processor.
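      For reference, one way to combine such statistics is HERest's own parallel mode (the -p option described in the HTK Book): each job dumps an accumulator file instead of re-estimated models, and a final pass with '-p 0' merges the accumulators. Whether 'HERest.pl' uses this mechanism or its own combination is not shown here; the sketch below, with placeholder paths, illustrates the -p route.

        # Sketch of HERest's built-in parallel mode; all paths are
        # placeholders. Accumulator numbers passed to -p start at 1, and
        # job n writes 'HER<n>.acc' into the -M output directory.
        use strict;
        use warnings;

        my $nproc = 4;

        # Steps 1-2: one accumulation job per piece of the script file.
        for my $i (1 .. $nproc) {
            system('qsub', '-b', 'y', 'HERest', '-p', $i,
                   '-C', 'config', '-I', 'labels.mlf', '-S', "train.$i.scp",
                   '-H', 'hmm0/macros', '-H', 'hmm0/hmmdefs',
                   '-M', 'hmm1', 'hmmlist') == 0 or die "qsub failed: $!";
        }

        # Steps 3-4: after all jobs have finished, '-p 0' combines the
        # accumulators into one set of re-estimated models.
        my @accs = map { "hmm1/HER$_.acc" } 1 .. $nproc;
        system('HERest', '-p', '0', '-C', 'config',
               '-H', 'hmm0/macros', '-H', 'hmm0/hmmdefs',
               '-M', 'hmm1', 'hmmlist', @accs) == 0
            or die 'HERest combination failed';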

    4. HResults.pl

      The HTK command 'HResults' does not take long when the list of recognition results is short, but when there are thousands of recognition results it may take a long time. Since the outcome of this command does not affect any subsequent processes, we can save time by submitting it as a single job and carrying on with the following processes at once. So this script follows only steps 1 and 2.
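      A fire-and-forget submission of this kind needs only a single 'qsub' call; the file names below are placeholders.

        # Hypothetical sketch: submit 'HResults' as a single queued job
        # and continue immediately, since nothing downstream depends on it.
        use strict;
        use warnings;

        system('qsub', '-b', 'y', 'HResults', '-I', 'ref.mlf',
               'wordlist', 'rec.mlf') == 0 or die "qsub failed: $!";
        # no completion check is needed here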

  • Conclusion
      These examples use the Sun Grid Engine (SGE) as the job queuing system, but they can be modified to work with any other job queuing system. Also, if there is only one machine with multiple processors, such as a dual-CPU, dual-core, or dual-CPU dual-core machine, the user can modify the scripts to run 2 or 4 jobs at the same time, for example by forking local processes as sketched below.
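      For a single multi-processor machine, the queue submission in the sketches above could be replaced by forking one local process per processor; the command and piece names remain the placeholders used earlier.

        # Hypothetical sketch: run the pieces as local processes instead
        # of queued jobs, one child per processor.
        use strict;
        use warnings;

        my $nproc = 4;                    # e.g. 2 for dual CPU, 4 for
                                          # dual CPU with dual core
        for my $i (0 .. $nproc - 1) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ($pid == 0) {              # child: run one piece and exit
                exec('HCopy', '-C', 'config', '-S', "train.$i.scp")
                    or die "exec failed: $!";
            }
        }
        wait() for 1 .. $nproc;           # parent: wait for all children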

      A tutorial on job queuing with the SGE can be found here.

      Please send feedback to bowonlee@uiuc.edu
