Friday, September 25, 2015

Annotating longform docs on-the-go

I think wearable devices will see widespread adoption only when they can be operated away from phones. This is why I like the idea of wireless earbuds with onboard memory (such as the Bragi Dash). These devices would let you load up music or podcasts for a run or hike without having to take your phone along (or hassle with wires or flaky Bluetooth connections). I never actually listen to music on the trails (it diminishes situational awareness, which can be dangerous), but I could see using earbuds like these for podcasts when hiking through environments that don't hold my interest as well, like cities.

In that scenario, I would want a few additional features: text-to-speech, document layout hint injection, and annotations. The first two are derived from SeeReader and are designed to convert a longform written piece into an audio document. Layout hint injection means the system would read out not only the body text of the article but also note when there is a figure that might be interesting. Obviously you wouldn't be able to look at the figure at the time, but combined with an audio annotation feature you could "mark" parts of the document to go back to later. For example, the text might discuss the growth of fracking in northern Colorado and reference a map in the document showing the appearance of drilling sites over time. Saying "mark" would create an annotation anchored to that part of the document, so I could check out the map when I'm back from my hike.
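To make the idea concrete, here is a minimal sketch of how such a player might work. Everything here is hypothetical (the `Segment` and `AudioDocPlayer` names, the offsets, and the sample text are all made up for illustration): the document is split into ordered segments, figure hints are injected as spoken segments alongside the body text, and saying "mark" bookmarks the character offset of whatever segment was most recently read aloud.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    doc_offset: int       # character offset of this segment in the source document
    text: str             # body text, or a spoken hint like "[Figure: ...]"
    is_figure_hint: bool = False

@dataclass
class AudioDocPlayer:
    segments: list
    position: int = 0                         # index of the next segment to speak
    marks: list = field(default_factory=list)  # offsets to revisit later

    def speak_next(self) -> str:
        """Return the next chunk of text to hand to a TTS engine."""
        seg = self.segments[self.position]
        self.position += 1
        return seg.text

    def on_voice_command(self, command: str) -> None:
        # "mark" anchors an annotation to the segment most recently spoken
        if command == "mark" and self.position > 0:
            seg = self.segments[self.position - 1]
            self.marks.append(seg.doc_offset)

# Hypothetical article: one body sentence, then an injected figure hint.
segments = [
    Segment(0, "The article body text goes here."),
    Segment(33, "[Figure: map of drilling sites over time]", is_figure_hint=True),
    Segment(75, "More body text follows the figure."),
]
player = AudioDocPlayer(segments)
player.speak_next()              # speaks the body sentence
player.speak_next()              # speaks the figure hint
player.on_voice_command("mark")  # bookmarks the figure's offset (33) for later
```

Back at a screen, the saved offsets could be used to scroll the original document to each marked figure.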
