07 September, 2014

### IEEE float conversion with Erlang bit syntax

After finishing off the Hardware-Software Interface course on Coursera offered by the University of Washington, one thing that I still felt I did not have a good grasp over was how floating point numbers are represented in binary. The course textbook, by Bryant and O'Hallaron, helped me get a much better understanding of how it is done under the IEEE 754 specification.

I then also wrote a small program, making use of Erlang's binary pattern matching capabilities, which converts a bit string into a float:

```erlang
%% K - number of bits to represent `exp`,
%% N - number of bits to represent `frac`.
%% | s | exp (K bits) | frac (N bits) |
make_float_converter(K, N) ->
    Bias = math:pow(2, K-1) - 1,
    Denom = math:pow(2, N),
    Max_exp = round(math:pow(2, K) - 1),
    fun(<<S:1, Exp:K, Frac:N>>) ->
            case Exp of
                0 ->                          % denormalized
                    E = 1 - Bias,
                    M = Frac / Denom,         % 1/2 + 1/4 + .. + 1/(2^N)
                    math:pow(-1, S) * M * math:pow(2, E);
                Max_exp ->                    % special cases
                    case Frac of
                        0 -> case S of
                                 0 -> pos_infinity;
                                 1 -> neg_infinity
                             end;
                        _ -> nan              % not a number
                    end;
                _ ->                          % normalized
                    E = Exp - Bias,
                    M = 1 + (Frac / Denom),
                    math:pow(-1, S) * M * math:pow(2, E)
            end
    end.
```

Here, `make_float_converter` is a higher-order function that takes K, the number of bits you want to use for the exponent, and N, the number of bits you want to use for the fractional part of your float. It returns a `fun` that pattern-matches a binary according to that specification and then converts that binary representation into a float.

Example usage of the function:

```erlang
F = ieee:make_float_converter(2, 2).
F(<<2#01011:5>>). %% s=0, exp=2#10, frac=2#11 => 3.5
```

(The explicit bit size, `:5` here, i.e. 1 + K + N bits, is needed; without it the binary would be padded to a whole byte and wouldn't match the `<<S:1, Exp:K, Frac:N>>` pattern.)
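For readers who want to experiment with the same decoding outside of Erlang, here is a rough Python sketch I wrote of the converter. The function names and structure are my own; only the decoding rules (the bias, the denormalized/normalized cases, and the all-ones-exponent specials) follow the Erlang version above:

```python
def make_float_converter(k, n):
    # k: number of exponent bits, n: number of fraction bits.
    bias = 2 ** (k - 1) - 1
    denom = 2 ** n
    max_exp = 2 ** k - 1

    def convert(bits):
        # `bits` is an int holding 1 + k + n bits laid out as | s | exp | frac |
        s = (bits >> (k + n)) & 1
        exp = (bits >> n) & max_exp
        frac = bits & (denom - 1)
        if exp == 0:                          # denormalized
            return (-1) ** s * (frac / denom) * 2 ** (1 - bias)
        if exp == max_exp:                    # special cases
            if frac == 0:
                return float("-inf") if s else float("inf")
            return float("nan")               # non-zero frac means NaN
        return (-1) ** s * (1 + frac / denom) * 2 ** (exp - bias)  # normalized

    return convert
```

For example, `make_float_converter(2, 2)(0b01011)` gives `3.5`, matching the Erlang example, and `make_float_converter(8, 23)` matches the bit layout of a standard single-precision float.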

05 August, 2014

### AI and Machine learning

Derrick Harris wrote a post on Gigaom about how the "threat of artificial intelligence" is being oversold. But I found that most of what the article seemed to be calling AI were really examples of machine learning. So I left a comment on the post, which I decided to post here as well. Please keep in mind that I myself am no expert in either machine learning or AI.

A machine learning system is “trained” by feeding it a large volume of data. It uses that data to build a statistical model, which it then uses to predict whatever it was designed to predict. For example, based on movies that you’ve liked and the movie ratings of thousands of other people, it can predict with a certain degree of accuracy which movies you will like. Or it could build models that analyze the sentiment of a piece of text, or recognize dog pictures.

Artificial intelligence, however, is something else entirely. The idea of AI is to build a machine that is sentient: a machine that is conscious of its own existence. This of course raises interesting questions about intelligence and consciousness, and that is where the challenge of building artificial intelligence lies.

In my opinion, when and if we develop artificial intelligence, it will be because of breakthroughs in our understanding of what intelligence is, not because we collected more data or processed it better (although that might play a role).

There has been quite an upsurge in machine learning research, and even machine learning jobs, in the past couple of years, which has also helped spread the confusion between AI and machine learning. By contrast, there are very few ongoing efforts in “true AI”; the most notable is probably the research of Douglas Hofstadter. The Atlantic did an interesting piece on him and his research a while back.

20 July, 2014

### Book writeup: Makers by Chris Anderson

This weekend I started and finished reading Makers by Chris Anderson. The book is about how designing and manufacturing physical things is about to undergo the same revolution that the internet brought to digital content and software. It was a short and fairly interesting read.

This was an exciting book to read, not just because it talks about how the future will look but because it shows how things have already begun to change. 3D printers and other "desktop" manufacturing tools let anyone manufacture physical objects. Crowdfunding sites like Kickstarter have allowed bold, eccentric projects to come to life while letting creators stay in charge of the project's creative direction. And "open-source hardware" projects like the Arduino and Raspberry Pi are cropping up everywhere. So the book will have you easily convinced that change is indeed underway, and it communicates the author's excitement about this pretty well. It certainly got me excited about what the future might look like.

One thing that did get annoying was how often the book repeated some of its "sound-bites". Still, it was interesting enough, and I'd recommend it.

16 June, 2014

### Drawing Fractals with Elm

I just pushed a small, quick toy project to GitHub called elm-fractals, which draws fractals using Elm. Elm itself is a really interesting functional reactive programming (FRP) language, though my project isn't actually reactive. I chose Elm because it seemed like a fun idea and because it makes drawing in the browser super easy. I might write a blog post on Elm soon, once I get a better handle on it, but do check it out if you're interested: there is already a lot of cool stuff out there.

A fairly low-res Mandelbrot fractal drawn by the Elm code I wrote:

I started playing around with fractals, mainly the Julia fractal, after reading two amazing blog posts explaining complex numbers and how they form these intriguing patterns called fractals. The first one is by Steven Wittens and the second one is by Jeremy Kun.
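Both fractals come from the same escape-time iteration: repeatedly apply z → z² + c and count how many steps it takes z to leave a disk of radius 2. Here is a minimal Python sketch of that core idea (my own illustration, not a port of the Elm code; the project maps these counts to colors):

```python
def escape_time(z, c, max_iter=100):
    # Iterate z -> z^2 + c, counting steps until |z| exceeds 2.
    for i in range(max_iter):
        if abs(z) > 2:
            return i          # escaped: the point is outside the set
        z = z * z + c
    return max_iter           # never escaped: treated as inside the set

def mandelbrot(c, max_iter=100):
    # Mandelbrot set: z starts at 0 and c is the point being tested.
    return escape_time(0, c, max_iter)

def julia(z, c, max_iter=100):
    # Julia set: c is a fixed constant and z is the point being tested.
    return escape_time(z, c, max_iter)
```

The only difference between the two is which of z and c plays the role of the pixel, which is why the Mandelbrot and Julia fractals are so closely related.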

Check the project out on GitHub: elm-fractals.

11 April, 2014

### Image diffing in Clojure

I recently wanted to try diffing two images, i.e., given two images, showing what is different between them. In this blog post, I'll explain the approach I took, which is admittedly pretty inefficient.

The diff images produced:

First off, we'll be using `javax.imageio` to read the image and we'll create a "patch" in a `java.awt.image.BufferedImage` so import those two:

```clojure
(ns imdiff.core
  (:require [clojure.java.io :as io])
  (:import javax.imageio.ImageIO
           java.awt.image.BufferedImage))
```

And I've written some helper functions on top of these:

```clojure
(defn read-image
  [path]
  (ImageIO/read (io/file path)))

(defn write-image
  [img path file-format]
  (ImageIO/write img file-format (io/file path)))
```

Each pixel will give us an ARGB value: an alpha value which is the transparency of that pixel and the plain old red, green, blue components of the pixel:

```clojure
(defn px->argb
  [px]
  (let [a (bit-and (bit-shift-right px 24) 0xff)
        r (bit-and (bit-shift-right px 16) 0xff)
        g (bit-and (bit-shift-right px 8) 0xff)
        b (bit-and px 0xff)]
    [a r g b]))

(defn argb->px
  [[a r g b]]
  (bit-or (bit-shift-left a 24)
          (bit-or (bit-shift-left r 16)
                  (bit-or (bit-shift-left g 8)
                          b))))
```

These functions were pulled from Nurullah Akkaya's blog post on Steganography.

Based on these, here is another helper function we will use while diffing:

```clojure
(defn transparent-px
  "Returns a pixel with transparency reduced"
  [px transparency]
  (let [[_ r g b] (px->argb px)]
    (argb->px [(int (* transparency 128)) r g b])))
```

#### Color difference

Initially I checked whether two pixels were exactly equal, but this yielded some weird results: images that looked identical would still produce a noisy diff because of small variations in pixel values. So instead, I now take the Euclidean distance between two pixels and check whether they are close enough:

```clojure
(defn color-difference
  [pxA pxB]
  (let [pxAargb (px->argb pxA)
        pxBargb (px->argb pxB)]
    (/ (Math/sqrt (apply + (map (comp #(Math/pow % 2) -) pxAargb pxBargb)))
       510)))

(defn compare-colors
  "Check if the `color difference` of two pixels is less than the
  tolerance. Difference and tolerance can be any value between 0.0 and
  1.0."
  [pxA pxB tolerance]
  (if (= 1.0 tolerance)
    (= pxA pxB)
    (< (color-difference pxA pxB) tolerance)))
```

#### Generating the diff

Finally, the diff actually consists of two files that show what is different in each of the images; I call these output files "patches". For the sake of simplicity, I also assume that the two images have the same dimensions:

```clojure
(defn write-diff
  "Write `patches` between imgA and imgB into the patch paths."
  [imgA imgB patchA-path patchB-path & {:keys [tolerance transparency]
                                        :or {tolerance 1.0
                                             transparency 0}}]
  (let [imgA-height (.getHeight imgA)
        imgA-width (.getWidth imgA)
        imgB-height (.getHeight imgB)
        imgB-width (.getWidth imgB)

        h imgA-height
        w imgB-width

        patchA (BufferedImage. w h BufferedImage/TYPE_INT_ARGB)
        patchB (BufferedImage. w h BufferedImage/TYPE_INT_ARGB)]
    (assert (and (= imgA-height imgB-height)
                 (= imgA-width imgB-width))
            "Height and width need to be equal")
    (doall (map deref
                (for [row (range h)]
                  (future
                    (doseq [col (range w)]
                      (let [a (.getRGB imgA col row)
                            b (.getRGB imgB col row)]
                        (if (util/compare-colors a b tolerance)
                          (do (.setRGB patchA col row (util/transparent-px a transparency))
                              (.setRGB patchB col row (util/transparent-px b transparency)))
                          (do (.setRGB patchA col row a)
                              (.setRGB patchB col row b)))))))))
    (write-image patchA patchA-path "png")
    (write-image patchB patchB-path "png")))
```

As I mentioned above, this writes the pixels that differ between the two images into the patch files, while pixels that are the same are written with their opacity reduced.

Notice also that the processing for each row of pixels happens in a `future`. I've found this to improve performance, especially for larger images: comparing two 2880×1800 PNG files took ~47s without the parallelization and ~14s with it.
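The shape of that parallelization, one task per row followed by a blocking deref of every future, can be sketched in Python with `concurrent.futures` (all the names here are mine; this is an illustration of the pattern, not part of the Clojure code):

```python
from concurrent.futures import ThreadPoolExecutor

def process_rows(height, width, compare, on_same, on_diff):
    # compare/on_same/on_diff are stand-ins for compare-colors and the
    # setRGB calls; this sketch only shows the parallelization shape.
    def process_row(row):
        for col in range(width):
            if compare(col, row):
                on_same(col, row)
            else:
                on_diff(col, row)

    with ThreadPoolExecutor() as pool:
        # Submit one task per row, then wait on every future; f.result()
        # blocks and re-raises exceptions, like deref on a Clojure future.
        futures = [pool.submit(process_row, row) for row in range(height)]
        for f in futures:
            f.result()
```

One caveat with this Python sketch: CPython's GIL means threads here give concurrency but not CPU parallelism for pure-Python pixel math, whereas the Clojure futures above do run in parallel on the JVM.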

You can try this function by first reading in images:

```clojure
(def cat (read-image "cat_orig.png"))
(def cat-red (read-image "cat_red.png")) ; second image to diff against (filename assumed)

(write-diff cat cat-red "A.png" "B.png")
```