CATS is a digital painting system that synthesizes texture from live video in real time. 

Ticha Sethapakdi and James McCann. "Painting with CATS: Camera-Aided Texture Synthesis". In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19).


We present CATS, a digital painting system that synthesizes textures from live video in real time, short-cutting the typical brush- and texture-gathering workflow. Through boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with one another. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool that can be used to create richly textured paintings.



CATS is implemented in C++ using modern OpenGL. Our texture synthesis solution builds on the coarse-to-fine framework of Lefebvre et al., with refinements made in the style of PatchMatch.


When you make a paint stroke, you can think of it as cutting a hole in the canvas. The goal is to make the inside of the hole look like the exemplar and the outside of the hole look like the canvas. A naive approach would be to tile the exemplar and paste the canvas on top of it. While this is a valid approach, it creates obvious seams at the tile boundaries that make the texture look 'unnatural', which does not work well for creating paintings that are artistic and aesthetically pleasing. The alternative is to synthesize the exemplar texture inside the hole and then paste the canvas on top.


The latter approach produces more organic results that 'look good', because synthesized textures preserve local structure: if you take any small region of pixels (call it a 'patch') in the synthesized texture, you can always match it to some patch in the original exemplar texture. To synthesize a texture, we therefore need a structure that keeps track of how those patches should be arranged in the target (the synthesized result). This structure is called the approximate nearest neighbor field (or ANN). It works like a lookup table that tells you, for a given patch in the target, the 'most similar-looking' patch in the exemplar. We use a divide-and-conquer approach to gradually develop the ANN and refine the synthesized texture: we downsample the exemplar and target, solve those smaller cases, and then work our way back up in coarse-to-fine order.

At the coarsest (smallest) level we start with a rough guess for the ANN: a simple tiling of the exemplar. We then use that initial guess to construct the low-resolution target texture. Since the target has changed, we must update the nearest neighbors by refining the ANN.


And now that we've updated the ANN, we have to refine the target. This circular process (using the ANN to refine the target, and the target to refine the ANN) repeats for several iterations, at progressively higher resolutions.

