services like rotten tomatoes are poor sources of recommendations for several reasons: they collapse a dense medium (film) onto a single taste dimension, they ignore the noise in the signal (the reasons and biases that might lead a reviewer to misreport), and they produce false positives because people tend to assimilate their tastes to the crowd in order to avoid cognitive load. i believe that one's friends are a representation of one's personal tastes, and that recommendations from people you know will be far better than algorithmic or popularity-based recommendations.
with the increased availability of rich media, the need to filter content down to a reasonable subset of things one will enjoy is growing. people do this naturally when they recommend movies and tv shows to each other, but the information is often lost because communication via language is lossy. if people are empowered with a tool to manage recommendations, the quality and effectiveness of those recommendations will increase. at the same time, the time wasted viewing media the viewer doesn't enjoy should decrease.
words are an error-prone form of communication. when you have an idea in your head that you want to convey to someone, you can describe it, but you can't guarantee that the idea will be reconstructed properly in the listener's head. in the last few decades, our ability to communicate ideas has been enriched by the introduction of new mediums. one example of this is the GIF.
one variable that determines the effectiveness of a communication vector is the speed at which one can generate messages. while it has been clear in the last few years that people want to communicate in a richer fashion using images, it is still tough to quickly create an image as a message in the middle of a conversation. this is the same reason a whiteboard serves as an effective and often desired tool for complex communication.
the idea behind insert pic is to reduce the time it takes to find an image that represents the idea you want to communicate, thereby expediting the adoption of communication via images and GIFs. insert pic is built to work with any textbox system-wide (any place where communication happens), and has a complete hotkey-triggered interface in order to minimize the clicks/keystrokes required to express oneself with images rather than words.
easyjerk was inspired by river of the net, a collaboration between ryan trecartin and david karp. what they showed was that with a large enough corpus of media, content can be spliced together to remove the original narrative and generate a completely new one. this is particularly interesting because the content itself no longer controls the message it communicates. rather, the message lives in the organization of the content, and the viewer who controls that organization ultimately decides it.
another interesting part of river of the net was that it makes explicit people's incredible ability to interpret short clips of content in rapid succession. this shouldn't be surprising; it's essentially what one does when flipping through channels. it also introduces the question: can a rapid flux of content itself be what one intends to watch?
i was inspired to test this out with porn for several reasons. first, the narrative of pornography is often attacked for largely serving power fantasies. by chopping up the content, and playing it in rapid succession, i wanted to see if i could create a new narrative around it. second, based on analyzing porn traffic data, i had a theory that the bulk of users don't sit down to watch a single video from start to finish. rather, they manually create the "rapid flux" effect by clicking around across many videos on streaming sites. after completing this project, i decided to recreate the effect for a more broadly accepted source of content: see queue tv.
queue tv is the evolution of easyjerk. it dynamically creates a seamless stream of content focused on specific topics by pulling videos from youtube and vimeo. after building the prototype, i continued to iterate on the concept with a small group of beta testers and quickly learned that people wanted to be able to quickly and easily send videos to others as they sat back and watched.
what i learned from queue tv was that content can trigger you to think of people who would enjoy/understand/relate to/empathize with that content. if an interface can capture that spark, and provide users with the tools necessary to turn it into action, one could build a media recommendation service that is better than what current machine learning algorithms are capable of. people's tendency toward pattern recognition, and their desire to share with others who recognize the same patterns, ends up being the most accurate recommendation engine there is. it has the added benefit that when a recommendation is sent to you, you can enjoy it on a personal level, knowing that you and the recommender share an idea.
one of my favorite and most inspiring classes in college was algebra (often referred to as "abstract algebra" at other schools). one of the best visual/applied examples of group theory is the wallpaper group. i spent a lot of time reading about it in order to better understand group theory, and it became increasingly relevant as i started to work on more vector art.
as i learned to use adobe illustrator, i had the urge to create complicated vector patterns, and the only thing that made it difficult was that the tools didn't exist. after prototyping a bunch of tools, i finally found a smooth way to create what i wanted. after completing madpattern, i created a series of high-density vector patterns to show how vector graphics can open up a new world of pattern design.
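the core trick behind wallpaper-group patterning can be sketched in a few lines. this is a minimal illustration under my own simplifications, not madpattern's actual code: it applies the four rotations of the p4 wallpaper group to a single motif point, producing the point's orbit within one cell. repeat that for every point in a motif and tile the cell, and you get a p4 pattern.

```python
import math

def p4_orbit(point, center=(0.5, 0.5)):
    """apply the 4-fold rotations of the p4 wallpaper group to a point,
    returning its orbit (all symmetric copies) within one cell."""
    cx, cy = center
    x, y = point
    orbit = []
    for k in range(4):
        theta = k * math.pi / 2  # 0, 90, 180, 270 degrees
        rx = cx + (x - cx) * math.cos(theta) - (y - cy) * math.sin(theta)
        ry = cy + (x - cx) * math.sin(theta) + (y - cy) * math.cos(theta)
        orbit.append((round(rx, 6), round(ry, 6)))
    return orbit

# one motif point becomes four symmetric copies around the cell center
print(p4_orbit((0.9, 0.5)))  # → [(0.9, 0.5), (0.5, 0.9), (0.1, 0.5), (0.5, 0.1)]
```

the other sixteen wallpaper groups work the same way, just with different generator sets (reflections, glide reflections, 3- and 6-fold rotations).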
my biggest issue with algorithmic recommendation engines is that they often just surface some sort of popularity metric. this is inherently non-personal, and i have yet to see a recommendation engine that solves it. the closest i've seen was a startup called hunch (site no longer available since its acquisition in 2011). their API allowed you to filter the space from which they drew insight down to a point where the recommendations became magically accurate.
around the same time that i was learning about hunch's API, ron feldman ran the idea by me of "a tool to help travelers find things that are similar to home". i knew that this sort of filtering was easy via hunch, so i whipped up a proof of concept. it turned out to be wildly accurate, and a really good tool for discovering restaurants while traveling. this led me to the idea of what movie should we see???. unfortunately, the hunch API was shut down as part of their acquisition.
every word in the dictionary is a registered domain. that leaves as available domain names: misspelled words, multiple words, alternative top-level domains, and new words. as a lover of puns and general wordplay, i wanted a tool that would help me come up with names for products/websites/etc. portmanteaus seemed a great choice, because they are less likely to already be registered, and each is a new word that can still roll off the tongue.
it turned out that none of the portmanteau generators online actually did it the right way though. the tools i found only generated portmanteaus based on the letters in the words, but to really do a good job, the tool needs to use phonemes. after finding the CMU Pronouncing Dictionary, i was able to generate a database that would let me search for phonetic matches to generate portmanteaus. i find word merge to be a great tool for name brainstorming, and still use it today.
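the phoneme idea can be sketched like this. word merge's actual matching logic may differ; here a tiny hand-written subset stands in for the CMU Pronouncing Dictionary (ARPAbet phonemes with stress markers dropped), and two words are merged at their longest shared run of phonemes.

```python
# stand-in for the CMU Pronouncing Dictionary: word -> ARPAbet phonemes
PHONES = {
    "motor": ["M", "OW", "T", "ER"],
    "hotel": ["HH", "OW", "T", "EH", "L"],
}

def best_overlap(a, b):
    """longest common contiguous phoneme run, with its start in a and b."""
    best = (0, 0, 0)  # (length, index in a, index in b)
    for i in range(len(a)):
        for j in range(len(b)):
            n = 0
            while i + n < len(a) and j + n < len(b) and a[i + n] == b[j + n]:
                n += 1
            if n > best[0]:
                best = (n, i, j)
    return best

def merge_phones(w1, w2):
    """splice w1's phonemes into w2's at their best shared run, or None."""
    a, b = PHONES[w1], PHONES[w2]
    n, i, j = best_overlap(a, b)
    if n == 0:
        return None
    return a[:i + n] + b[j + n:]

# "motor" + "hotel" share the run OW T, splicing to M OW T EH L ("motel")
print(merge_phones("motor", "hotel"))
```

matching on letters alone would miss pairs like this, where the shared sound sits mid-word and the spellings barely overlap.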
one of my favorite driving games is "six degrees", where you choose two random actors/actresses and attempt to find a connection between them, going from person to movie to person. it's a fun way to strain your brain trying to do a graph traversal in your head.
i created a web/js prototype version of this game so that i could play it with my friends when we weren't together. this was also a good introduction to the d3.js library, which i had been meaning to try out. the game went through several versions as APIs to serve movie posters, cast data, and actor photos were created and deprecated. there is currently a great and stable API that could work perfectly, but i have yet to integrate it: the movie db.
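the in-your-head traversal is just breadth-first search over a bipartite actor-movie graph. a minimal sketch with a toy hand-written cast graph (the real game pulls cast data from an API, and my prototype is in js rather than python):

```python
from collections import deque

# toy bipartite cast graph: movie -> set of actors
CAST = {
    "movie: pulp fiction": {"john travolta", "samuel l. jackson"},
    "movie: die hard 3":   {"samuel l. jackson", "bruce willis"},
}

def neighbors(node):
    """movies link to their actors; actors link to their movies."""
    if node.startswith("movie: "):
        return CAST[node]
    return {m for m, actors in CAST.items() if node in actors}

def six_degrees(start, goal):
    """BFS from one actor to another, alternating actor -> movie -> actor."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(six_degrees("john travolta", "bruce willis"))
```

BFS guarantees the shortest chain, which is exactly what you want when scoring the game: here travolta connects to willis through samuel l. jackson in two hops.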
framethrower was a two-year software research startup focused on exploring the mathematical structure of stories and how they connect and evolve. the general approach was to create a graph database, and a UI that allowed for easy and quick sketching of a story. as a first pass, we applied the tool to movies, a natural choice since the entire team loved watching movies together, and the founder, tedg, was the top movie reviewer on imdb.com at the time.
the UI allowed one to create objects and link them in a visual display of a directed graph. it also included a timeline, so one could take a movie, choose a region of the timeline, and sketch out the story occurring at that time. the user could then link that story, or a subset of it, to another scene annotated in another movie. the link itself could contain information describing the nature of the connection: for example, two scenes from two movies where one director references another. our prime example of this was in "moulin rouge!", which contains a clear reference to a scene from the movie "being there". our goal was to allow this sort of deep annotation of stories, to allow for better understanding and communication of ideas, and, more self-servingly, for more intense and objective appreciation and discussion of what is actually occurring in movies.
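the underlying data model can be sketched in a few types. the actual framethrower schema was richer than this; the timestamps here are illustrative, not the real positions of the scenes in either film:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scene:
    """a region of one movie's timeline, annotated by the user."""
    movie: str
    start: float  # seconds into the film (illustrative values below)
    end: float

@dataclass(frozen=True)
class Link:
    """a typed, directed edge between two scenes, possibly across movies."""
    source: Scene
    target: Scene
    kind: str   # e.g. "references", "parallels", "inverts"
    note: str

rouge = Scene("moulin rouge!", 4100.0, 4160.0)
there = Scene("being there", 6800.0, 6850.0)
link = Link(rouge, there, "references",
            "the later film clearly nods to this earlier scene")
print(link.kind)
```

because links are first-class objects carrying their own kind and note, the graph can answer questions like "show me every scene that references this one", which is the deep-annotation goal described above.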
the project eventually lost funding, which was extremely disheartening, but i learned a ton from the project. the source can be found here: matthandlersux/framethrower.
a collaboration with ben gleitzman, wikimaze is a trivia game in the style of "mindmaze" (a game embedded in encarta). wikimaze automatically generates trivia questions and answers using data from wikipedia, which makes it an essentially never-ending source of trivia questions.
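one simple way to auto-generate such questions, sketched here with stand-in strings rather than live wikipedia data (wikimaze's real generation logic may differ): blank the subject out of an article's opening sentence and offer other article titles as distractors.

```python
import random

# stand-ins for (article title, first sentence) pairs pulled from wikipedia
ARTICLES = {
    "jupiter": "jupiter is the fifth planet from the sun and the largest in the solar system.",
    "saturn":  "saturn is the sixth planet from the sun.",
    "mars":    "mars is the fourth planet from the sun.",
}

def make_question(title, seed=0):
    """turn one article into a fill-in-the-blank multiple-choice question."""
    sentence = ARTICLES[title].replace(title, "_____")
    choices = sorted(ARTICLES)       # all titles become answer choices
    random.Random(seed).shuffle(choices)
    return {"question": sentence, "choices": choices, "answer": title}

q = make_question("jupiter")
print(q["question"])  # → "_____ is the fifth planet from the sun and ..."
```

since every article yields a question, the question supply grows with wikipedia itself, which is what makes the game effectively never-ending.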
the idea behind starglue was to create a tool that made it easier for one to find out about new works from artists they love. it embraced a broad definition of the term "artist", and allowed for tracking of all sorts of content. the interface was similar to twitter, except the posts were things like "new album release", "gallery event nearby", "new movie from your favorite director", "new drawing posted", etc.
much of my time online is spent hunting down content from creators i love, and i wanted a tool that aggregated all of it in one place. i was tired of finding out too late that an artist i had marked as a favorite had a local event where i could have seen their work in person. similarly, i found it increasingly hard to track when bands i like were in town, when my favorite actors/directors/etc had new movies out, and so on. starglue was a solution to this, but unfortunately it shut down due to lack of funding. the idea lives on though, most recently in the way that queue works.
todos in code are often a sign of laziness, a crutch used in place of solving problems and being diligent about one's own codebase. at originate, we avoid todos, and request that pull requests with todos either address them before merging or create a ticket to track the potential technical debt. what i found was that this process was too laborious, and engineers would avoid marking things that deserved a todo because they didn't want to break their concentration to go make a ticket.
tododrop aims to solve this with a happy medium: let developers put todos in their code, then make it easy for reviewers to decide whether a todo should be addressed in the current pull request or converted into an issue/ticket.
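the first half of that flow, finding todos and turning each into a ticket payload a reviewer can file, can be sketched like this. the regex and payload shape are my own simplifications, not tododrop's actual implementation:

```python
import re

# match "# TODO ..." and "// TODO ..." comments, capturing the description
TODO_RE = re.compile(r"(?:#|//)\s*TODO[:\s]\s*(.+)", re.IGNORECASE)

def extract_todos(source, filename="example.py"):
    """scan source text for todo comments, returning issue-ready payloads."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = TODO_RE.search(line)
        if m:
            issues.append({
                "title": m.group(1).strip(),
                "body": f"from {filename}:{lineno}",
            })
    return issues

code = "x = 1  # TODO: handle negative input\ny = 2\n"
print(extract_todos(code))
```

a reviewer then only has to choose, per item, between "fix now" and "file this payload as a ticket", instead of writing the ticket by hand mid-review.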
a similar project to restaurantology, this used the hunch API to narrow down the potential results when figuring out what movie you want to see, in theaters, right now. the results were surprisingly accurate; often the top-ranked result was exactly what i would have picked on my own. unfortunately, i launched this about two weeks before the hunch acquisition, and the API i relied on was shut down.