A perceptually uniform color space learned by an MLP.
Euclidean distance in this space corresponds to perceived color difference.
Previously proposed color spaces were hand-crafted; we tried a machine-learning approach instead. We provide two selected example models, shown below. You can explore them in a 3D interactive viewer here: Colorspace1, Colorspace2. The description below mainly refers to the second model. Since a neural network model depends on its random initial parameters and training procedure, it is certainly possible to train an even better color space.
CIEDE2000, currently one of the most widely used color-difference metrics, is not perfect. The two color pairs above have the same RGB Euclidean distance and look equally similar to the human eye, yet the metric reports very different values for them. In our color space, the Euclidean distances of the two pairs are nearly the same.
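Concretely, a perceived difference in our space is just the Euclidean distance between mapped coordinates. A minimal sketch in plain Python (the RGB values below are from the example pair; real comparisons use the coordinates from the model's CSV, not raw RGB):

```python
import math

def euclidean_distance(c1, c2):
    """Euclidean distance between two 3-component color coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# In RGB, the two pairs above are the same distance apart:
d_rgb = euclidean_distance((50, 120, 38), (85, 200, 5))
# In the learned space, the mapped (x, y, z) coordinates are compared the
# same way, and perceptually similar pairs come out with similar distances.
```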
You can see more gradients in the "gradient" folder of each color space.
You can generate palettes of evenly distinct colors in the perceptually uniform space. We used the force-vector algorithm from IWantHue (code). You can also view the result in 3D.
ex) Generate a palette of 20 colors.
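A hedged sketch of the force-vector idea in NumPy (the function name, step size, and the choice to operate directly on RGB are illustrative; the project runs this inside the learned space via palette_generator.py):

```python
import numpy as np

def force_vector_palette(n_colors, n_iters=200, step=2.0, seed=0):
    """IWantHue-style repulsion: candidate colors push each other apart
    until they settle into an evenly spread palette."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, 255, size=(n_colors, 3))
    for _ in range(n_iters):
        # pairwise displacement vectors and distances between all points
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        # inverse-square repulsion force acting on each point
        force = (diff / dist[..., None] ** 3).sum(axis=1)
        pts = np.clip(pts + step * force, 0, 255)
    return pts

palette = force_vector_palette(20)  # ex) a palette of 20 colors
```

Running the same loop on coordinates in the learned space (instead of RGB) makes "evenly spread" mean "evenly distinct to the eye".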
The data was collected from https://colors-82cc6.web.app/. It is an array-like container of (answer, color1, color2) entries.
ex) [[3, (50,120,38), (85,200,5)],
[0, (220,52,135), (200,83,121)],
... ]
The training goal is to make the Euclidean distance of each color pair in the learned color space match the human response (perceived difference from 0 to 5). Either an L1 or an L2 loss can be used.
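The objective for a batch of rated pairs can be sketched as follows (NumPy; `pair_loss` and the array shapes are illustrative, the actual training code lives in train.py):

```python
import numpy as np

def pair_loss(mapped1, mapped2, answers, loss="l1"):
    """The model's Euclidean distance for each pair should match the
    human-rated difference (0-5).
    mapped1, mapped2: (N, 3) model outputs for color1 and color2.
    answers: (N,) human ratings."""
    d = np.linalg.norm(mapped1 - mapped2, axis=1)
    err = d - answers
    return np.abs(err).mean() if loss == "l1" else (err ** 2).mean()

# A pair mapped exactly 3.0 apart with a rating of 3 contributes zero loss:
zero = pair_loss(np.array([[0.0, 0.0, 0.0]]),
                 np.array([[3.0, 0.0, 0.0]]),
                 np.array([3.0]))
```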
Architecture of one block. During experiments, the MLP often collapsed the color space (initially a cube-shaped RGB space) into a line or a plane. To keep the space a three-dimensional convex shape, the dimension must be reduced back to three. We therefore use a block containing a batch-normalization layer that keeps the X, Y, Z coordinates spread out. Because of the batch normalization, the whole batch must be used per epoch. Also, to convert an RGB color to our space, you cannot simply pass a single value to the model: you must compute the mapping for every possible color in advance, save the result (to a CSV file), and look the mapping up from the CSV.
Overall architecture of a model. The model chains several consecutive blocks in a row.
Both of our models have a hidden dimension of 100 and 10 blocks, and use Softplus as the activation function. We tried many activation functions; Softplus worked best.
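Under those settings, the forward pass can be sketched as follows (NumPy with random untrained weights; the real definition is in model.py). Note how each block expands to the hidden dimension, projects back to 3, then batch-normalizes:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def block(x, w1, b1, w2, b2, eps=1e-5):
    """One block: expand to the hidden dim, apply Softplus, project back
    to 3-D, then batch-normalize so the X, Y, Z coordinates stay spread.
    Because of the batch norm, a block only makes sense on a whole batch."""
    h = softplus(x @ w1 + b1)   # (N, 3) -> (N, hidden)
    y = h @ w2 + b2             # back to (N, 3): keep the space 3-D
    return (y - y.mean(axis=0)) / np.sqrt(y.var(axis=0) + eps)

def model(x, params):
    """The full model chains several consecutive blocks (ours use 10)."""
    for w1, b1, w2, b2 in params:
        x = block(x, w1, b1, w2, b2)
    return x

rng = np.random.default_rng(0)
hidden, n_blocks = 100, 10
params = [(rng.normal(size=(3, hidden)) * 0.1, np.zeros(hidden),
           rng.normal(size=(hidden, 3)) * 0.1, np.zeros(3))
          for _ in range(n_blocks)]
batch = rng.uniform(0, 1, size=(32, 3))  # a batch of normalized RGB colors
out = model(batch, params)               # (32, 3) coordinates in the new space
```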
- model.py : MLP model definition
- train.py : Train the MLP model
- colorspace_to_csv.py : After training, choose a model and convert its color space into a CSV file. Since the MLP has no inverse function and contains batch-normalization layers, it is impossible to convert a single color point between the spaces; we have to compute the mapped points for the entire color range in advance and save the result to a CSV file.
- utils.py : Functions for color jobs, including color conversion, inversion, distance computation, and setting a new (lightness) axis.
- gradient.py : Plot the color gradient.
- 3d_visualization.py : Show the color space in an interactive 3D tool. After running this file, open the downloaded .html file.
- palette_generator.py : Randomly generates distinct color palettes in our color space. You can also view the result in the 3D visualization.
- cross_section.py : Plot cross sections of the color space along different axes.
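Since every conversion goes through the precomputed table from colorspace_to_csv.py, mapping a color amounts to a nearest-grid-point search. A sketch with an in-memory stand-in for the CSV (the table values below are made up for illustration; the real table is sampled over the full RGB range at a fixed interval):

```python
import numpy as np

# Illustrative stand-in for the precomputed CSV: each row pairs an RGB
# grid point with its mapped (x, y, z) coordinates in the learned space.
rgb_table = np.array([[0, 0, 0], [0, 0, 5], [0, 5, 0], [5, 0, 0]], dtype=float)
xyz_table = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.2],
                      [0.0, 0.3, 0.1], [0.4, 0.1, 0.0]])

def to_colorspace(rgb):
    """Snap an arbitrary RGB color to the nearest precomputed grid point
    and return its coordinates in the learned space."""
    idx = np.argmin(np.linalg.norm(rgb_table - np.asarray(rgb, float), axis=1))
    return xyz_table[idx]

coords = to_colorspace((1, 0, 0))  # nearest grid point is (0, 0, 0)
```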
Usage
- Build and train the model. The training process saves a plot of the color space and the model parameters every 200 epochs in the "model/{modelname}" directory.
- Select a model and pass the path of its checkpoint to colorspace_to_csv.py, which saves the color space to a CSV file.
- In utils.py, set the name and CSV file path of the model you want to use at the top of the code. If you want to give the color space a new lightness axis, which connects the two points black and white, use utils.set_lightness() and set its CSV file path as color_space_lightness.
- Use the color space: get the distance between two colors, show the 3D visualization, generate color palettes, etc.
├── model                     # for the model-training step
│   ├── modelname1
│   │   ├── parameter/
│   │   ├── image/
│   │   └── train history/
│   ├── modelname2
│   └── ...
├── color space               # for actual use after selecting a model
│   ├── colorspacename1
│   │   ├── cross section/
│   │   ├── gradient/
│   │   ├── colorspacename1 3D.html
│   │   ├── colorspacename1 interval 5.csv
│   │   ├── colorspacename1 interval 15.csv
│   │   ├── colorspacename1 lightness.csv
│   │   └── ...
│   ├── colorspacename2
│   └── ...
├── model.py
├── train.py
└── ...
Our project can be described as teaching computers to perceive colors like the human eye. The next step is to make computers understand and feel colors like humans do.
We are also eager to build a user-friendly web version of our product.
2022-1 College of Liberal Studies, Seoul National University.
Creative and Interdisciplinary Seminar: Digital Humanities (Professor Javier Cha)
Final project by Team Incógnito.
Teammates: Boseol Mun (@healthykim), Tanya Khagay, Jinhyeong Kim