A webcam diary would be great for learning lucid dreaming, and I would love some features tailored to that. Here is an important one (which is actually a set of features ;)
While recording or playing a video, you can pause and input a word/sentence by clicking on the frame. The input is a field with autocomplete, or you can switch to selecting the word/sentence directly from a list, sorted by occurrence. The word/sentence is stored, associated with that video and with that time and (x, y) position in the video.
The entries of the list/autocomplete are words/sentences previously inserted in all the videos of a category (e.g.: dreams).
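To make the idea a bit more concrete, here is a rough TypeScript sketch of how the data could be modelled and how the occurrence-sorted autocomplete could work. All the names are made up for illustration; nothing here refers to the actual app.

```typescript
// Hypothetical data model (illustrative names only).
// A dictionary entry is one distinct word/sentence within a category (e.g. "dreams")
// and carries the "impossible" flag; an annotation is one placement of it in a video.
interface DictionaryEntry {
  text: string;          // the word/sentence
  impossible: boolean;   // the "impossible" flag, default false
}

interface Annotation {
  text: string;          // refers to a DictionaryEntry.text
  videoId: string;       // the video (e.g. a dream log) it was placed in
  timeSec: number;       // playback time at which it was entered
  x: number;             // click position, as a fraction of the frame width (0..1)
  y: number;             // click position, as a fraction of the frame height (0..1)
}

// Autocomplete entries: texts previously used in all videos of the category,
// sorted by how often they occur.
function autocompleteEntries(annotations: Annotation[]): string[] {
  const counts = new Map<string, number>();
  for (const a of annotations) {
    counts.set(a.text, (counts.get(a.text) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])   // most frequent first
    .map(([text]) => text);
}
```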
Immediately after you type a new word/sentence (and also later, on another screen), you can mark it as "impossible" (the default is unchecked).
When you select from the autocomplete/list an already existing word/sentence that is marked as "impossible", the app signals it in some way (e.g. by rendering it in a different color, blinking it, or whatever).
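Again just as an illustrative sketch, reusing the DictionaryEntry type above; the CSS class name for the highlighting is invented:

```typescript
// Toggled right after typing a new word/sentence, or later from another screen.
function setImpossible(entry: DictionaryEntry, impossible: boolean): void {
  entry.impossible = impossible;
}

// When an existing entry is picked from the autocomplete/list, flagged ones get
// a visual hint, e.g. a hypothetical CSS class that changes colour or blinks.
function onEntrySelected(element: HTMLElement, entry: DictionaryEntry): void {
  if (entry.impossible) {
    element.classList.add("impossible-hint");
  }
}
```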
When you play a video, the words/sentences appear for a few seconds at the positions where they were created.
When a word/sentence is marked as "impossible", it is rendered differently from the "possible" ones.
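A possible sketch of that playback overlay, assuming a plain HTML5 &lt;video&gt; element with a positioned container on top of it; the display duration and class names are invented:

```typescript
// Each annotation is shown for a few seconds from its stored time, at its
// stored (x, y); "impossible" entries get a different style via CSS.
const DISPLAY_SECONDS = 3;

function renderOverlay(
  container: HTMLElement,
  video: HTMLVideoElement,
  annotations: Annotation[],
  entries: Map<string, DictionaryEntry>,   // text -> entry, for the impossible flag
): void {
  container.innerHTML = "";   // clear labels from the previous update
  const t = video.currentTime;
  for (const a of annotations) {
    if (t < a.timeSec || t >= a.timeSec + DISPLAY_SECONDS) continue;
    const label = document.createElement("span");
    label.textContent = a.text;
    label.className = entries.get(a.text)?.impossible ? "annotation impossible" : "annotation";
    label.style.position = "absolute";
    label.style.left = `${a.x * 100}%`;   // x, y stored as fractions of the frame
    label.style.top = `${a.y * 100}%`;
    container.appendChild(label);
  }
}
```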
So the app can show a list of the words/sentences, sorted by occurrence and with filters (by typing characters, and by selecting impossible yes/no).
Clicking on one of them gives you the list of dream videos that contain the selected word/sentence; clicking a video in this list jumps straight to the point in playback where the word/sentence is.
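And a sketch of the filtered list plus the jump-into-video behaviour, reusing the types and functions from the sketches above and again assuming a plain HTML5 &lt;video&gt; element:

```typescript
// Filters: typed characters (matching anywhere in the text) and an optional
// "impossible" yes/no filter; results stay sorted by occurrence.
function filterEntries(
  annotations: Annotation[],
  entries: Map<string, DictionaryEntry>,
  query: string,
  impossibleFilter?: boolean,   // undefined = no filter
): string[] {
  const matching = annotations.filter(a =>
    a.text.toLowerCase().includes(query.toLowerCase()) &&
    (impossibleFilter === undefined ||
      (entries.get(a.text)?.impossible ?? false) === impossibleFilter)
  );
  return autocompleteEntries(matching);   // reuse the occurrence sorting from above
}

// Picking a video from the result list seeks straight to the annotation's time.
function jumpTo(video: HTMLVideoElement, annotation: Annotation): void {
  video.currentTime = annotation.timeSec;
  video.play();
}
```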
This would be powerful for training the unconscious mind on some frequently dreamed "impossible" triggers (e.g. "pink elephant talking to me") that help the dreaming self realize he/she is in a dream, and so start the lucid experience. It would also help in learning some frequently dreamed "possible" triggers (e.g. "me running in a grass field") that could be used as hooks for reality checks inside dreams (e.g. jumping and flying, looking at your own hands, turning a light on and off with a switch, reading a digital clock, etc.).
PS: storing the position of the words/sentences in the videos is useful because you could draw a picture, show it to the webcam, and then attach the words/sentences to the items in the picture. Imagine, for example, drawing an oneiric scene you remember while logging your last dream: you sketch a shape with the pencil and attach the sentence "pink elephant", and on the sky you attach "purple sky".
PPS: the autocomplete must work on every part of the sentence; e.g. typing "sky" should make the "purple sky" sentence appear in the selection list.
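In the sketches above that would simply mean matching with a substring test, e.g.:

```typescript
// Match anywhere in the stored sentence, not only at its start.
function matches(entry: string, typed: string): boolean {
  return entry.toLowerCase().includes(typed.toLowerCase());
}

console.log(matches("purple sky", "sky"));   // true: "sky" is part of the sentence
```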
On the same topic, if you're interested, I've also proposed a web service idea here.