How is divyacakṣus, the divine eye, made?


Boudoir // divyacakṣus - the divine eye

Quil Image Blending

First, we need to add a handful of images to the images folder.

Then we run the sketch with lein run -m blending.blending

Use rename.sh if your image names contain special characters.
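rename.sh itself is not listed here; as a rough idea of what such a cleanup does, a minimal Python sketch that replaces anything outside ASCII letters, digits, dots, dashes and underscores (the exact rules of the real script are an assumption):

```python
import os
import re

def sanitize_filenames(directory):
    """Rename every file in `directory` so that only ASCII letters,
    digits, '.', '_' and '-' remain; other characters become '_'."""
    renamed = []
    for name in os.listdir(directory):
        clean = re.sub(r'[^A-Za-z0-9._-]', '_', name)
        if clean != name:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, clean))
        renamed.append(clean)
    return renamed
```

After this, Quil's load-image no longer trips over spaces or parentheses in the paths.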

On every frame the sketch randomly selects an image and a blending mode and draws it onto the canvas; the source and destination regions are randomized as well (see the Quil blend API).

(ns blending.blending
  (:require [quil.core :as q]
            [me.raynes.fs :as fs]))

(defn setup []
  (q/background 0 0 0)
  (q/frame-rate 60)
  (q/set-state! :images (fs/list-dir (str fs/*cwd* "/src/blending/images"))))

(defn draw []
  (let [modes [:blend :lightest :difference
               :exclusion :multiply :overlay
               :screen :soft-light]
        images (q/state :images)
        ;; rand-nth avoids passing a float index to nth,
        ;; and load-image wants a path string, not a File
        eff-image (q/load-image (str (fs/absolute (rand-nth images))))
        eff-mode (rand-nth modes)]
    (q/blend
     eff-image
     ;; source region: x, y, width, height (int coordinates)
     (int (q/random 250))
     (int (q/random 250))
     (int (q/random (q/width)))
     (int (q/random (q/height)))
     ;; destination region: x, y, width, height
     (int (q/random 1000))
     (int (q/random 1000))
     (int (q/random 1000))
     (int (q/random 1000))
     eff-mode)))

(q/defsketch Blending
  :size [1800 1000]
  :title "Blending"
  :setup setup
  :draw draw)

The result looks like this:

[optional] Shuffle-merge

If you need some glitch in your video, you can apply the shuffle-merge effect via vprocess.py:

# Inside video-processing
./vprocess.py shuffle-merge --nb-chunk X --max-dur-subclip 1 --min-dur-subclip 0
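vprocess.py itself is not shown here, but the idea behind shuffle-merge is simple: cut random-length subclips out of the video and glue them back in shuffled order. A minimal sketch of the chunking logic, with the parameter names mirroring the CLI flags above (the actual implementation may differ):

```python
import random

def shuffle_merge_plan(duration, nb_chunks, min_dur, max_dur, rng=random):
    """Return a shuffled list of (start, end) subclip boundaries.

    Each subclip starts at a random point in the clip and lasts
    between min_dur and max_dur seconds, clamped to the clip length.
    """
    plan = []
    for _ in range(nb_chunks):
        start = rng.uniform(0, duration)
        length = rng.uniform(min_dur, max_dur)
        plan.append((start, min(start + length, duration)))
    rng.shuffle(plan)
    return plan
```

With moviepy, each (start, end) pair would become a clip.subclip(start, end) call and the pieces would be reassembled with concatenate_videoclips.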

The Douglas Vasquez Effect

We call the function the_douglas_vasquez_effect with the foreground footage as input_1 and the background footage (the Quil blending we prepared earlier) as input_2.

Since this is a quick-and-dirty script, you will need to tweak the code to fit your own footage.

import random

from moviepy.editor import VideoFileClip

# `e` is the project's own effects module and BLENDING_MODES its list of
# blending-mode names; both are defined elsewhere in the repository.

def the_douglas_vasquez_effect(input_1, input_2):
    clip1 = VideoFileClip(input_1)  # .subclip(0, 50)

    clip2 = VideoFileClip(input_2)  # .subclip(380)
    clip2 = clip2.resize((clip1.size[0], clip1.size[1]))

    invertc_counter = 0
    eff_img = None
    eff_effect = None
    counter = 0
    imgs = []

    def _f(gf, t):
        nonlocal invertc_counter, eff_img, eff_effect, counter, imgs

        counter -= 1

        img1 = gf(t)               # foreground frame
        img2 = clip2.get_frame(t)  # background frame (the Quil blending)

        # Every 25-70 frames, pick a new source (img1, img2 or a blend
        # of the two) and a new blending mode.
        if counter <= 0:
            if not imgs:
                imgs = ['img1', 'img2', 'img1+img2']

            random.shuffle(imgs)
            eff_img = imgs.pop()
            eff_effect = random.choice(BLENDING_MODES)
            counter = random.randint(25, 70)
            if eff_img == 'img2':
                counter += 30

        if eff_img == 'img2':
            return img2
        elif eff_img == 'img1':
            return img1
        else:
            # Periodically invert the foreground colors before blending.
            invertc_counter -= 1
            if invertc_counter <= -50:
                invertc_counter = 10
            elif invertc_counter >= 0:
                img1 = e.effects.invert_color(img1)

            opacity = random.randint(1, 10) * 0.1
            return e.effects.blend_images(eff_effect, img1, img2,
                                          opacity=opacity)

    clip1.fl(_f).write_videofile("output.mp4", audio=False)
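The blend_images call above comes from the project's own effects module, which is not listed here. As a rough NumPy equivalent, here is what blending img1 over img2 with a mode and an opacity could look like for two common modes (the mode names and formulas are assumptions, not the actual implementation):

```python
import numpy as np

def blend_images(mode, img1, img2, opacity=1.0):
    """Blend two uint8 RGB frames; 'multiply' and 'screen' shown."""
    a = img1.astype(np.float32) / 255.0
    b = img2.astype(np.float32) / 255.0
    if mode == 'multiply':
        blended = a * b
    elif mode == 'screen':
        blended = 1.0 - (1.0 - a) * (1.0 - b)
    else:
        raise ValueError(f"unsupported mode: {mode}")
    # Mix the blended result back over img2 by the given opacity.
    out = (1.0 - opacity) * b + opacity * blended
    return (out * 255.0).round().astype(np.uint8)
```

invert_color from the same module is presumably just 255 - img on each channel.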

The result looks like this: