As you may or may not know, we are working on a game engine here at donationcoder.com. I won't go into the details of the engine, but mouser and I had a small discussion a while ago about accessibility in it, and we came up with the idea of attaching a text field to each sound in the game. Something along the lines of:
#include <string>

class scSound
{
public:
    scSound(std::string FileName = "");
    ~scSound();

    void Play();
    void Stop();
    void Pause();
    void Preload();

    // Captioning support: a text string is attached to the sound, and its
    // appearance can follow the sound's volume.
    void EnableCaption(bool value);
    void SetCaptionVolumeAlpha(char value);
    void SetCaptionVolumeGamma(char value);

    void SetVolume(char volume);
    void SetSound(std::string Filename);
    // ... etc. ...

private:
    std::string m_Filename;
    std::string m_Caption;
};
Note that this is just roughly how I have it in my head and not a real class yet, since the engine is nowhere near the point where we can start implementing this. In the actual engine it will probably not even take the form of a C++ class, since we have our own objects/agents in the game, but it helps to explain the idea...
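Just to make the idea a bit more concrete, a hypothetical usage could look something like this (nothing here is final API, and how the caption text itself gets set or loaded is left open):

    scSound explosion("explosion.wav");   // hypothetical sound with an associated caption
    explosion.EnableCaption(true);        // the user has captions turned on
    explosion.SetCaptionVolumeAlpha(127); // caption opacity partly follows the volume
    explosion.SetVolume(100);
    explosion.Play();                     // caption text is shown while the sound plays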
If the user has enabled captioning, the caption strings would be shown along with the sounds as they are played, for the duration of the sound or for a custom duration. By linking captions to sounds at the lowest level, captioning is supported from the very initial design up. SetCaptionVolumeAlpha/Gamma could adjust the caption's opacity/brightness in relation to the volume of the sound being played, e.g. a value of 0 means no sensitivity (don't change the opacity/gamma at all), while a value of 255 means zero volume is completely transparent/dark and full volume is completely opaque/light.
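To sketch how that Alpha value might translate into an actual opacity (the function name and the 0-255 ranges here are made up, just to illustrate the idea): a sensitivity of 0 leaves the caption fully opaque, a sensitivity of 255 makes the opacity track the volume exactly, and values in between blend linearly between the two:

    // Hypothetical helper: map the current volume (0-255) and the Alpha
    // sensitivity (0-255) to a caption opacity (0-255).
    unsigned char CaptionOpacity(unsigned char volume, unsigned char sensitivity)
    {
        const int fixedOpacity   = 255;     // opacity if the caption ignores the volume
        const int trackedOpacity = volume;  // opacity if the caption tracks the volume exactly
        // Linear blend between the two, weighted by the sensitivity.
        return static_cast<unsigned char>(
            (fixedOpacity * (255 - sensitivity) + trackedOpacity * sensitivity) / 255);
    }

The same kind of mapping could drive the Gamma (brightness) side of it.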
I'd be VERY interested to hear your ideas on how to take captioning to the next level, so we can build more accessibility support into the engine from the initial design up. Even though we are nowhere near having to worry about this stuff yet, it's good to know what will be needed, so we can keep it in the back of our heads while taking the very first steps.