Friday, October 2, 2015
Neighborhood digital library
It would be great to have something like a neighborhood Little Free Library, but for digital documents. The Plasma Poster is one approach, but I think the reason the little library works is that there's not much of value in it (and no electrical components to maintain).
But how can we share digital documents at a completely analog kiosk? We need a way of pressing a link to a document (e.g., a QR code) into paper. One approach might be a light-activated stamp ink: you could display a QR code on your phone screen, press it against the stamp to make an impression, and then stamp a piece of paper that you could leave at the library. Another low-fi approach (from the kiosk's perspective, anyway) would be deformable phone screens. The screen could deform to match a QR code, and with a piece of paper placed over the screen and a pencil you could easily create a rubbing of the code.
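The digital half of this is easy today. As a rough illustration, here is how a phone app might render the QR code for a document link before it ever touches the stamp or the screen. This is a minimal sketch assuming the open-source Python `qrcode` package, and the URL is just a placeholder; the high error-correction setting is there because a hand-stamped or pencil-rubbed code will be noisy.

import qrcode

# Hypothetical link to the document being shared at the kiosk.
doc_url = "https://example.org/neighborhood-library/some-document.pdf"

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # high redundancy: survives a rough stamp or rubbing
    box_size=12,  # large modules transfer to paper more reliably
    border=4,     # quiet zone around the code
)
qr.add_data(doc_url)
qr.make(fit=True)

# Render as a plain black-and-white image to show full-screen on the phone.
img = qr.make_image(fill_color="black", back_color="white")
img.save("document-link.png")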
Friday, September 25, 2015
Annotating longform docs on-the-go
I think wearable devices will see widespread adoption only when they can be operated away from phones. This is why I like the idea of wireless earbuds with onboard memory (such as the Bragi Dash). These devices would allow you to load up music or podcasts for a run or hike without having to worry about taking your phone with you (or hassling with wires or brittle Bluetooth connections). I actually never use music while I am on the trails (listening to music can be dangerous as it diminishes situational awareness), but I could see using them for podcasts when hiking through environments that don't hold my interest as well (like cities).
In fact, in that scenario, I would like to add a few features: text-to-speech, document layout hint injections, and annotations. The first two features are derived from SeeReader and are designed to convert a longform (written) piece into an audio document. The layout hint injections just mean that the system would not only read out the body text of the article but also note when there is a figure that might be interesting. Obviously you wouldn't be able to look at it at the time, but in combination with an audio annotation feature you could "mark" parts of the document that you want to go back to later. So, for example, the text might discuss the growth of fracking in northern Colorado and reference a map in the document that shows the appearance of drilling sites over time. Saying "mark" would create an annotation anchored to that part of the document so I could check out the map when I'm back from my hike.
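To make the idea a bit more concrete, here is a rough Python sketch of the playback loop I have in mind. Everything in it (DocSegment, speak, listener_heard_mark) is hypothetical, standing in for real text-to-speech and voice-command plumbing rather than any particular API; the point is just how layout hints and "mark" annotations would interleave with narration.

from dataclasses import dataclass, field

@dataclass
class DocSegment:
    kind: str      # "body" or "figure"
    text: str      # body text, or a figure caption used as the layout hint
    location: str  # e.g. a page/paragraph id, so a mark can be resolved later

def speak(text: str) -> None:
    """Stand-in for a text-to-speech call on the earbuds."""
    print(f"[TTS] {text}")

def listener_heard_mark() -> bool:
    """Stand-in for a voice-command check after each segment; always False here."""
    return False

@dataclass
class Narrator:
    segments: list
    marks: list = field(default_factory=list)

    def play(self) -> None:
        for seg in self.segments:
            if seg.kind == "figure":
                # Layout hint injection: mention the figure instead of silently skipping it.
                speak(f"There is a figure here: {seg.text}. Say 'mark' to save it for later.")
            else:
                speak(seg.text)
            if listener_heard_mark():
                # Record which part of the document to revisit after the hike.
                self.marks.append(seg.location)

The marks list is all you would need back at home: each entry points at the paragraph or figure to jump to when the document is open on a screen again.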