‘I discovered DALL-E and was blown away’: How artists are using AI to create very offline art


Hello and welcome to Eye on AI.

In recent issues, I’ve covered why artists and creatives are pushing back on generative AI. In short, they worry about their work being used to train these models without consent, their work being devalued, and fewer opportunities for human creatives in a world where AI can be faster and cheaper. But even the more than 200 prominent musical artists who last week signed an open letter laying out such concerns emphasized they believe that “when used responsibly, AI has enormous potential to enhance creativity.”

We’re still very much in the early days of these issues and generative AI overall, and there are no easy answers for how to balance all this just yet. But as it’s unfolding, some artists are experimenting with how they can use generative AI as a tool to enhance, not hinder, their creative work.

One such artist is Sydney Swisher, an Illinois-based painter and photographer whose work I’ve followed for a while. Her distinctive pieces involve painting on fabric, immersing a scene—a quaint windowsill, a dresser adorned with jewelry and photos, a dreamy bedroom—into the material’s pattern (usually floral) so the two merge as one. The results are always beautiful, textured, and dimensional, and the whole thing feels incredibly offline. So I was shocked to learn that in between thrifting old fabrics and brushing on paint, Swisher uses generative AI to help bring her visions to life.


After choosing a fabric to serve as the base for a painting, Swisher selects photos she’s taken to use as reference images, brings them into Midjourney, and adds prompts describing a memory the fabric seems to evoke. She then tweaks the prompts, adds more reference images, and plays with how the images and fabric interact. After selecting one of the images Midjourney produces, she edits it further in Photoshop to bring it closer to the specific memory she’s channeling, creating the final reference image she’ll paint.

“I discovered DALL-E and was blown away by its ability to craft a scene I had in my mind. I found Midjourney around April of last year and was amazed at the quality of images and ability to edit specific details,” she told me, adding that the quality of Midjourney images won her over and that she also loves being able to feed it her own reference images, including old film photos from her childhood.

A large part of her art, Swisher said in a TikTok video describing her process, is based around machine intelligence visually interpreting descriptions of memories.

“I want my work to be a reflection on memory. I have very vague visual references in my mind of places that evoke strong nostalgia and emotion,” she said. “I’m interested in the exploration of trying to fully bring that time back.”

She continued: “When you focus on trying to pinpoint every detail of a memory, pieces will always be blurry and you begin to question yourself. What parts are accurate? Did I dream that, or did it really happen? The first time I saw a generated image from DALL-E, I felt like it perfectly captured this feeling. To me, it truly looked like a still from a memory or a dream.”

In another example of creatives trying to play nice with generative AI, filmmaker Paul Trillo spoke on the Hard Fork podcast about his experience previewing Sora, OpenAI’s new text-to-video model that recently sent both Hollywood executives and much of the general public into a panic. Trillo’s account is one of the first hands-on reviews of Sora; he describes how he tested what the model could do and found it had its own sense of editing and pacing.

“I was shocked. I was floored. I was confused. I was a little bit unsettled because I was like, damn, this is doing things that I didn’t know it was capable of,” he said when asked what his initial emotional response was the first time he typed in a prompt and got a video.

Trillo said the video he created in days with Sora would’ve taken months with traditional tools—and also that the model “can give us some experimental, wild, bold, weird things that may be difficult to achieve with other tools.” Still, he thinks the model is only supplemental to human video production. Similar to Swisher, he views it as a new part of the process he can use to achieve his vision.

“I’m only going to focus on this tool to get everything out of my head,” Trillo said.

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

Today's edition of Eye on AI was curated by Sharon Goldman.

This story was originally featured on Fortune.com