I’m currently reading a paper about using Kohonen neural networks for color quantization. I think I’ll implement median cut before starting on it. This way I’ll be able to compare it with something other than my current octree implementation.
Speaking of octrees! In the last post, the artifacts were caused by the fact that I wasn’t merging the node with the least pixel count. I was actually merging the last created node. This was clearly visible in the 32-color version of the RGB color wheel.
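To make that criterion concrete, here’s a minimal sketch of what the fix amounts to, assuming the reducible nodes of a level sit in a singly linked list and carry a pixel count. The struct and field names are hypothetical, not the ones from my actual code.

// Hypothetical node type; field names are assumptions, not my real implementation.
struct Node {
    unsigned long pixelCount;  // number of pixels that passed through this node
    Node*         next;        // next reducible node on the same octree level
    // ... child pointers, accumulated color sums, etc.
};

// Pick the reducible node with the smallest pixel count instead of the
// most recently created one, which is what caused the gradient artifacts.
Node* pickNodeToMerge(Node* reducible)
{
    Node* best = reducible;
    for (Node* n = reducible; n != 0; n = n->next)
        if (n->pixelCount < best->pixelCount)
            best = n;
    return best;
}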
I added the pixels to the octree this way:
for (int j = 0; j < h; ++j)
    for (int i = 0; i < w; ++i)
        octree.add(pixel[i][j]);
The gradient is more detailed for pixels near the origin (the first pixels inserted). In the following image, the pixels are inserted in reverse order (from (w,h) back to (0,0)). We can see that the gradient is inverted (just as we expected).
Here’s the result with the pixels inserted in random order. The gradient artifact seems to have disappeared, but the result still isn’t correct since we don’t respect the standard merge criterion.
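For reference, here’s one way the random-order insertion could be done without the STL: shuffle a flat index array with Fisher-Yates, then feed the pixels to the octree in that order. OctreeT and PixelRows are placeholders for the types in my code, not their real names.

#include <cstdlib>   // std::rand()

// Shuffle a flat array of pixel indices (Fisher-Yates) so the octree sees
// the pixels in a random order.
template <typename OctreeT, typename PixelRows>
void addPixelsShuffled(OctreeT& octree, PixelRows& pixel, int w, int h)
{
    const int count = w * h;
    int* indices = new int[count];
    for (int k = 0; k < count; ++k)
        indices[k] = k;

    // Fisher-Yates shuffle, no STL needed.
    for (int k = count - 1; k > 0; --k) {
        int r = std::rand() % (k + 1);
        int tmp = indices[k];
        indices[k] = indices[r];
        indices[r] = tmp;
    }

    // Insert the pixels in the shuffled order.
    for (int k = 0; k < count; ++k)
        octree.add(pixel[indices[k] % w][indices[k] / w]);

    delete[] indices;
}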
The only thing left to do is to sort the mergeable nodes by their pixel count. Since a list is stored for each octree level, a merge sort sounds like the best solution. I’m using this implementation since I’m not using the STL. The gradient disappears for the RGB color wheel, but the Sorcerian image’s colors are still washed out…
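For the curious, the idea is roughly the following: a recursive merge sort on a singly linked list of reducible nodes, ordered by ascending pixel count. This is only a sketch reusing the hypothetical Node struct from above, not the implementation I linked to.

// Merge two lists that are already sorted by pixel count.
Node* mergeSorted(Node* a, Node* b)
{
    Node  dummy;
    Node* tail = &dummy;
    dummy.next = 0;
    while (a && b) {
        if (a->pixelCount <= b->pixelCount) { tail->next = a; a = a->next; }
        else                                { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return dummy.next;
}

// Sort a level's list of reducible nodes by ascending pixel count.
Node* mergeSort(Node* head)
{
    if (!head || !head->next)
        return head;

    // Split the list in two with the slow/fast pointer technique.
    Node* slow = head;
    Node* fast = head->next;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
    }
    Node* right = slow->next;
    slow->next  = 0;

    return mergeSorted(mergeSort(head), mergeSort(right));
}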
And here’s the source code.