CLIPme if you can!

2022, May 12    
Justus Thies

CLIP is a powerful tool for matching images to text descriptions. It can be used for zero-shot classification of ImageNet classes, as shown in the original paper, but it can also be used to assign the names of (famous) people to images. Specifically, we assume that we have an image and a list of names of the people in the image as input (the names are also called queries below).

As a first step, we detect faces in the input image, which gives us the regions of interest:

[Image: Face Detections]
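
For completeness, here is a minimal sketch of this detection step. It assumes OpenCV's bundled Haar cascade face detector; the actual code in the repository may use a different detector:

```python
import cv2

def detect_faces(image_path):
    """Detect faces and return their bounding boxes and image crops."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Haar cascade shipped with OpenCV; any face detector works here.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The boxes are the regions of interest; also return the crops.
    crops = [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
    return image, boxes, crops
```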

Using these detections, we can run the CLIP model on the cropped regions of interest and compute a matching score for each of the names we provide as input. We then annotate the original image, labeling each face with its best-matching name:

[Image: Face Annotations]

As can be seen, it works surprisingly well. Except for Hendrik Lorentz (who is actually seated between Marie Curie and Albert Einstein), it matches all queries.
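
In code, the matching and annotation steps might look roughly like the following sketch, using the openai/clip package. The prompt template ("a photo of ...") and the greedy per-face argmax are assumptions for illustration, not necessarily what the repository does:

```python
import clip
import cv2
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def match_names(crops, names):
    """Assign each face crop the best-matching name (greedy argmax)."""
    # Encode the queries; a simple prompt template tends to help CLIP.
    text = clip.tokenize([f"a photo of {name}" for name in names]).to(device)
    images = torch.stack([
        preprocess(Image.fromarray(cv2.cvtColor(c, cv2.COLOR_BGR2RGB)))
        for c in crops
    ]).to(device)
    with torch.no_grad():
        image_features = model.encode_image(images)
        text_features = model.encode_text(text)
    # Cosine similarity between every face crop and every name.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = image_features @ text_features.T
    return [names[i] for i in similarity.argmax(dim=-1).tolist()]

def annotate(image, boxes, labels):
    """Draw each detection box with its matched name onto the image."""
    for (x, y, w, h), label in zip(boxes, labels):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(image, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return image
```

Note that the greedy argmax can assign the same name to several faces; a one-to-one assignment (e.g. the Hungarian algorithm on the similarity matrix) would avoid that.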

It also works well for other famous people, like state leaders:

[Image: Face Annotations]

Interestingly, CLIP also knows the relationships between the countries they represent:

[Image: Face Annotations]

Obviously, I had to try it with an image of myself, and it turns out that it can also annotate my face:

[Image: Face Annotations]

[Image: Face Annotations]

Conclusion:

CLIP not only stores general concepts of text and images; to some extent, it also stores information about individual people. We can also see this in the results of DALL-E 2, where we can ask it to render a cat in an outfit like Napoleon's.

Code:

GITHUB

Image Credits: