AI and Music: "Promenade"

Musician extraordinaire Don Harriss has mastered the use of artificial intelligence to enhance his musical career and passion. Stable Diffusion is an AI model that creates images from a user’s written description. Harriss first learned to generate visual content with DALL-E 2, and he now uses AI imagery as backdrops for videos featuring excerpts of his music. In his recent video, excerpts from “Reflections” play as viewers are taken through visuals of promenades from day to night.

When choosing visuals for the video, I aim to match the mood of the music through tone and color. I prefer an amorphous look over a realistic one, as it invites listeners to engage their imagination and complete the visual.
I used several tools to create the video. To start, I generated imagery in Midjourney and then in Stable Diffusion running on Google Colab. To dial in the look I wanted, I used img2img. Next, I ran the imagery through Deforum to create a video. For titles and finishing details, I used After Effects.
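For readers curious what the img2img step looks like in practice, here is a minimal sketch using the Hugging Face diffusers library, one common way to run Stable Diffusion in a Colab notebook. The model ID, file names, prompt, and parameter values are illustrative assumptions, not the settings used for this video.

```python
# A minimal sketch of an img2img pass, assuming the Hugging Face
# diffusers library. Model ID, file names, prompt, and settings are
# illustrative, not the author's actual workflow.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint; fp16 keeps it within a Colab GPU's memory.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Start from an image generated elsewhere (e.g., a Midjourney export).
init_image = Image.open("midjourney_frame.png").convert("RGB").resize((768, 512))

# strength controls how far the result drifts from the source image;
# guidance_scale controls how strongly the prompt steers the output.
result = pipe(
    prompt="amorphous dreamlike promenade at dusk, soft washes of color",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("styled_frame.png")
```

Lower strength values stay closer to the source image, while higher values lean further into the prompt, which is the knob that lets you "dial in" a look without abandoning the original composition.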
For a more in-depth look at how I use these platforms to create each video, stay tuned for my next piece in October!