Conceptual art.

This is a five-part piece of conceptual art titled Derivative Works. Each image here is generated algorithmically using one (or more) pieces of other art as input, but bears no resemblance to its 'progenitors'. The intent is to question what it means for an image to be a 'derivative' of another, both in the legal sense of copyright law and in the moral sense.

For Perceptual, Sorted, Transpose, and Sample, the input image is Wikipedia's high-resolution image of the Mona Lisa. The inputs for Composite are the first 10,000 images in this subset of the Stable Diffusion image set.

All images in Derivative Works are public domain (specifically, CC0). I do request that you give credit if you make use of them. The source code and original-resolution PNGs are available on Codeberg.

Perceptual (Derivative Works 0x0)

Perceptual uses a perceptual hash of its input image and then applies a fairly simple generative algorithm to render the output (which is intended to vaguely recall a night ocean, but that's not the point). The use of a perceptual hash means that the output depends on what the image 'looks like'; changing a pixel here or there will still result in the same output.
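The repository doesn't specify which perceptual hash Perceptual uses, but the key property can be sketched with an average hash (aHash), one of the simplest perceptual hashes. This toy assumes the image has already been downscaled to an 8x8 grayscale grid; each bit records whether a cell is brighter than the mean, so nudging a single pixel leaves the hash unchanged:

```rust
// A minimal average-hash sketch; the actual hash used by Perceptual may differ.
// Input: an image already downscaled to an 8x8 grayscale grid.
fn average_hash(grid: &[u8; 64]) -> u64 {
    let mean = (grid.iter().map(|&p| p as u32).sum::<u32>() / 64) as u8;
    grid.iter()
        .enumerate()
        .fold(0u64, |acc, (i, &p)| if p > mean { acc | (1 << i) } else { acc })
}

fn main() {
    // A toy "image": dark left half, bright right half.
    let mut grid = [0u8; 64];
    for row in 0..8 {
        for col in 4..8 {
            grid[row * 8 + col] = 200;
        }
    }
    let original = average_hash(&grid);

    // Nudging one pixel slightly leaves the hash unchanged,
    // which is the property Perceptual relies on.
    grid[0] = 10;
    assert_eq!(average_hash(&grid), original);
}
```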

Sorted (Derivative Works 0x1)

Sorted simply sorts the pixels. The output image has all the same pixels as the original, but going back to the input would be impossible since you have no way of knowing which pixels go where.
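The whole transform fits in one line. This sketch assumes a simple lexicographic sort on (r, g, b) tuples; the actual ordering Sorted uses may differ, but any fixed ordering has the same one-way property:

```rust
// A sketch of the Sorted transform: treat the image as a flat list of
// RGB pixels and sort them. The real ordering is an assumption here.
fn sort_pixels(pixels: &mut Vec<[u8; 3]>) {
    pixels.sort_unstable();
}

fn main() {
    let mut pixels = vec![[200, 10, 10], [0, 0, 0], [10, 200, 10]];
    sort_pixels(&mut pixels);
    // Same multiset of pixels, but the original arrangement is gone.
    assert_eq!(pixels, vec![[0, 0, 0], [10, 200, 10], [200, 10, 10]]);
}
```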

Transpose (Derivative Works 0x2)

Transpose interprets the image as a series of RGB values, then repeatedly applies a variation of the horseshoe map to them: take the first byte, then the last byte, then the second, then the second-to-last, and so on. Repeating this process four times produces something that is absolutely unrecognizable, yet still contains the same information as the input image. Furthermore, unlike Sorted, the transformation is reversible; it's not hard to write a corresponding "unfold" transform that, when applied four times, would give the input image. So is this an unlawful derivative work (because it can be transformed into the original) or lawful (because it bears no visual resemblance)?
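The fold described above, and the "unfold" that undoes it, can be sketched like this (a reconstruction from the description, not the repository's actual code):

```rust
// One round of the fold: first byte, last byte, second, second-to-last...
fn fold(bytes: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(bytes.len());
    let (mut i, mut j) = (0usize, bytes.len());
    while i < j {
        out.push(bytes[i]);
        i += 1;
        if i < j {
            j -= 1;
            out.push(bytes[j]);
        }
    }
    out
}

// The inverse: even positions rebuild the front, odd positions the back.
fn unfold(bytes: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8; bytes.len()];
    let (mut front, mut back) = (0usize, bytes.len());
    for (k, &b) in bytes.iter().enumerate() {
        if k % 2 == 0 {
            out[front] = b;
            front += 1;
        } else {
            back -= 1;
            out[back] = b;
        }
    }
    out
}

fn main() {
    let input = b"0123456789".to_vec();
    // Four rounds of folding, as in Transpose, then four rounds of unfolding.
    let folded = (0..4).fold(input.clone(), |v, _| fold(&v));
    let recovered = (0..4).fold(folded, |v, _| unfold(&v));
    assert_eq!(recovered, input);
}
```

No information is lost at any step, which is exactly why the legal question is interesting: the output is, in a strong sense, the original image in disguise.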

Sample (Derivative Works 0x3)

The Stable Diffusion model was trained on roughly 2.3 billion images. The model itself is about 5 gigabytes of storage. This implies that, from an information-theory point of view, the model contains very little information about any particular image: on average, around two bytes per image. A single pixel in a truecolor PNG contains four bytes of information: one each for red, green, blue, and alpha. Sample, which was generated by computing a 4-byte perceptual hash of its input image, is therefore a visual representation of roughly how much information Stable Diffusion has about it.
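The back-of-the-envelope arithmetic, using the approximate figures quoted above (not exact measurements):

```rust
fn main() {
    let model_bytes: f64 = 5.0e9; // ~5 GB model
    let images: f64 = 2.3e9;      // ~2.3 billion training images
    let bytes_per_image = model_bytes / images;
    // Roughly 2.2 bytes per image: less than a single RGBA pixel.
    assert!(bytes_per_image > 2.0 && bytes_per_image < 4.0);
}
```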

Composite (Derivative Works 0x4)

Composite takes the principle of Sample to its extreme. I downloaded roughly 10,000 images from the Stable Diffusion data set, computed a 4-byte hash of each, then concatenated the hashes together to produce the output. In the interest of saving time and effort, I interpreted each bit as a pixel, either on or off. There are therefore 512 * 512 / 8 / 4 = 8192 images represented. (I downloaded more than 8192 to account for URLs that failed to download or no longer pointed to a valid image.)
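The layout can be sketched as follows: each image contributes one 4-byte hash, and each of its 32 bits becomes one on/off pixel of the 512x512 output. The hash function itself is elided; this just packs bits and checks the capacity arithmetic from the paragraph above:

```rust
const SIDE: usize = 512;

// Pack 4-byte hashes into a flat list of on/off pixels, one bit per pixel,
// most significant bit first (the bit order is an assumption).
fn pack_hashes(hashes: &[[u8; 4]]) -> Vec<bool> {
    let mut pixels = Vec::with_capacity(SIDE * SIDE);
    for hash in hashes {
        for &byte in hash {
            for bit in (0..8).rev() {
                pixels.push((byte >> bit) & 1 == 1);
            }
        }
    }
    pixels
}

fn main() {
    // 512 * 512 pixels / 32 bits per hash = 8192 images, as in the text.
    let capacity = SIDE * SIDE / (4 * 8);
    assert_eq!(capacity, 8192);

    let hashes = vec![[0xAB, 0xCD, 0xEF, 0x01]; capacity];
    assert_eq!(pack_hashes(&hashes).len(), SIDE * SIDE);
}
```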