Thursday, April 2, 2015

Day or Night?

I stumbled upon a really cool website for programming practice: HackerRank.

[Disclaimer: I have no affiliation with them!]

I like HackerRank because its questions are compact and test pure programming topics. There are contests, but not like ChallengePost or TopCoder, where competitions often come coupled with a company trying to promote its newest tool/platform/etc. HackerRank's questions fall within domains such as algorithms, machine learning, and much more. You can write your solutions in any language, submit, and wait for your scores. That's it. Because each problem is scoped to be solvable in about an hour, it makes for good interview practice. So check it out.

So with that said, I had a go at the machine learning question: determine whether an image was taken during the day or at night. That sounded fun. Try it yourself.

I want this post to read like a stream of consciousness as I work through the problem. I think that will be interesting, too, because I Googled something along the lines of "determine day or night image" and nothing immediately came up!

So my first thought was: blue!

When a movie has a nighttime scene, it is often filmed during the day and given a blue tint to make it look like night (see "Blue Tint" in this Wiki article). So my thinking was that if I broke a picture down and saw more "blue," it must be a nighttime picture. (I will explain some flaws in just a bit.)


[Image omitted. Credit: Hejl. License. No changes.]

So we know we can extract the RGB values from the pictures. In short, every color in a picture is a combination of three fundamental colors: red, green, and blue. When an image is digitized, the amount of each color in a pixel is quantified with a value between 0 and 255, where 0 means the color is absent and 255 means the color is fully present. So we are able to identify how much "blue" is in a picture!
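
To make that concrete, here is a minimal sketch of pulling RGB values out of a picture. It assumes the Pillow library is installed, and "photo.jpg" is just a hypothetical file name:

# Minimal sketch: read per-pixel RGB values with Pillow ("photo.jpg" is a placeholder).
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")  # force a plain 3-channel RGB image
r, g, b = img.getpixel((0, 0))                # RGB of the top-left pixel
print(r, g, b)                                # each channel is an integer from 0 to 255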

But how do we summarize the "blue-ness" of a picture? Should we sum the blue value over every pixel? No, that wouldn't be good: bigger pictures have more pixels, so they would come out looking more "night-like." I think the average blue value is a better summary, so that's what I computed. And since this is a machine-learning challenge, I went looking for pictures for a training set. Check out the set here!
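
Here is a rough sketch of that averaging step, assuming Pillow plus NumPy; the file names and labels below are placeholders, not the actual training set:

# Sketch: average R, G, B per image over a small (hypothetical) labeled set.
import numpy as np
from PIL import Image

def mean_rgb(path):
    """Return the average (R, G, B) of an image as floats between 0 and 255."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    return pixels.reshape(-1, 3).mean(axis=0)

training = [("beach_noon.jpg", "day"), ("street_2am.jpg", "night")]  # placeholder files/labels
features = [(mean_rgb(path), label) for path, label in training]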

I wrote some Python code to summarize the RGB averages and plotted them against each other (a rough sketch of the plotting code follows the plots below):

[Scatter plots: Red Average vs. Green Average; Red Average vs. Blue Average; Green Average vs. Blue Average]
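
A plotting sketch along these lines (not my exact script) could look like the following, assuming matplotlib and the features list from the averaging snippet above:

# Sketch: pairwise scatter plots of the per-image color averages.
import matplotlib.pyplot as plt

pairs = [(0, 1, "Red", "Green"), (0, 2, "Red", "Blue"), (1, 2, "Green", "Blue")]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (i, j, xname, yname) in zip(axes, pairs):
    for rgb, label in features:
        ax.scatter(rgb[i], rgb[j], c="navy" if label == "night" else "orange")
    ax.set_xlabel(xname + " average")
    ax.set_ylabel(yname + " average")
plt.tight_layout()
plt.show()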

Look at all the night points: they clump toward the bottom left. All of their color values are low! Blue alone isn't a good metric. Going by what I said earlier, a giant picture of a blue flower in broad daylight would be considered a nighttime image! The better metric is that all of the color values are low.

So what should I choose to classify these data points? Linear regression would not make sense here; I need to establish that photos with low RGB values are nighttime images. I chose the very simple Naive Bayes. (Are all the features really independent, though? To be discussed...)
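
To give a flavor of what that step might look like (the actual run and results come next time), here is a sketch using scikit-learn's GaussianNB on the averaged features from above; the test image name is made up:

# Sketch: Gaussian Naive Bayes on the per-image color averages (scikit-learn).
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([rgb for rgb, label in features])    # mean R, G, B per image
y = np.array([label for rgb, label in features])  # "day" or "night"

clf = GaussianNB()
clf.fit(X, y)
print(clf.predict([mean_rgb("mystery_photo.jpg")]))  # hypothetical unlabeled image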

This project will be continually updated. I'll reveal the outcome of Naive Bayes next time. And then we'll tackle the proper way to do image recognition: convolutional neural networks.
