Stories in my pocket

They say a picture is worth a thousand words, but wouldn’t it be amazing if you could actually read those words?

One of my favorite things to do in New York City is people watching. After grabbing my coffee, I visit the park for a few minutes and watch people pass me by. Depending on their facial expressions and walking pace, I create stories in my head about the passersby.

With that idea, I created an app that generates stories from the user’s photo library. Stories in my pocket uses AI that can recognize images and create stories using natural language processing.


How it works

Using NLP and image recognition, Stories in my pocket creates narratives from the user’s photo library.



My role entailed designing the UX/UI and building the prototype in Principle.
Duration: 2 weeks



Image recognition: The process by which a computer understands the content of a picture, such as telling a lion apart from a tiger. An easy task for a human, but a very difficult one for a computer.

Location: How the computer knows where a picture was taken. This metadata can be seen in the iOS photo library.

NLP: Natural language processing, the field in which computers try to understand human language.
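To make the three steps above concrete, here is a minimal Python sketch of the pipeline. It is purely illustrative, not the app’s actual implementation: the photo data, function names, and story template are all placeholders, and the real app would use a trained image-recognition model and an NLP model in place of the stand-in functions.

```python
# Illustrative sketch: image labels + location -> short narrative.
# The label data and story template are placeholders, not real model output.

def recognize_image(photo):
    """Stand-in for an image-recognition model: returns labels for a photo."""
    # In the real app, a trained model would produce these labels.
    return photo.get("labels", [])

def build_story(labels, location):
    """Stand-in for the NLP step: turns labels and a location into a narrative."""
    if not labels:
        return "A quiet moment, somewhere unknown."
    subjects = " and ".join(labels)
    return f"In {location}, {subjects} crossed paths, and a small story began."

photo = {"labels": ["a lion", "a tiger"], "location": "Central Park"}
story = build_story(recognize_image(photo), photo["location"])
print(story)
# → In Central Park, a lion and a tiger crossed paths, and a small story began.
```

The key design idea is the hand-off: the image-recognition step reduces a photo to labels plus location metadata, and the NLP step takes only that structured output to compose the narrative.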
