| Posted: January 13 2004 at 7:33pm | IP Logged
Well, there's a published standard that says the minimum display duration for any caption should be 2 seconds. It goes on to recommend a word-per-minute rate of about 160 wpm for middle schoolers, up to 240 wpm or so for average adult readers.
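To make that guidance concrete, here's a small sketch (my own illustration, not from the standard's text) of how you'd turn a reading rate into a display duration, with the 2-second floor applied:

```python
# Illustrative only: derive a caption display duration from a target
# reading rate, enforcing the 2-second minimum mentioned above.

MIN_DURATION = 2.0  # seconds, per the published minimum

def display_duration(word_count, wpm):
    """Seconds a caption should stay on screen at a given reading rate."""
    reading_time = word_count / wpm * 60.0  # words / (words per minute) -> minutes -> seconds
    return max(MIN_DURATION, reading_time)

# A 10-word caption at the middle-school rate of 160 wpm:
print(display_duration(10, 160))  # 3.75 seconds
# The same caption at an adult rate of 240 wpm:
print(display_duration(10, 240))  # 2.5 seconds
```

A short caption at a fast rate simply bottoms out at the 2-second minimum.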
While that's great guidance if you're writing your own captions, it's less helpful if you're encoding captions from existing material. So what I did was combine captions that fall within a desired 'minimum display' period into a single caption. When I find a caption that starts outside that period, I write out the combined caption as a single script entry, displayed all at once at the original start time of the first caption in the group. The process then begins all over again with that first caption outside the period.
The net effect is that the longer combined captions are displayed for at least the minimum specified period. The next caption is displayed at its original start time, so there's no cumulative time lag. You do lose a bit of synchronization with the audio track, and that will probably be disconcerting to a viewer with normal hearing who watches the cc version. But a hearing-impaired individual will find the result to be much more usable. Since that was my goal, I'm pretty happy with the compromise. Hope my customer is too.
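The combining pass described above can be sketched roughly like this (my own reconstruction of the logic, not the actual program; captions here are just (start-time, text) pairs and all names are illustrative):

```python
# Sketch of the caption-combining pass: merge consecutive captions whose
# start times fall within the minimum display window of the group's first
# caption, then flush the group at that first caption's original start time.

MIN_DISPLAY = 2.0  # minimum display period, in seconds

def combine_captions(captions, min_display=MIN_DISPLAY):
    """captions: list of (start_seconds, text), sorted by start time."""
    combined = []
    group_start = None
    group_text = []
    for start, text in captions:
        if group_start is None:
            # first caption of a new group
            group_start, group_text = start, [text]
        elif start - group_start < min_display:
            # still inside the window: fold this caption into the group
            group_text.append(text)
        else:
            # this caption falls outside the window: write out the group
            # at the first caption's original start time, then restart
            # the process with this caption
            combined.append((group_start, " ".join(group_text)))
            group_start, group_text = start, [text]
    if group_text:
        combined.append((group_start, " ".join(group_text)))
    return combined
```

For example, three captions starting at 0.0 s, 1.0 s, and 3.0 s with a 2-second window come out as two: the first two merged at 0.0 s, and the third at its original 3.0 s start, so there's no cumulative lag.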
By the way, I installed v2.2 of the driver and captured SAMI files too, and really liked the results (they were actually very similar to those my program produces). But that approach would require capturing each tape twice, and then doing much of the same manual processing my approach requires. The cost of that approach is too high for my application.