This is not a flower.
This is not a hand.
This is not a city.
This is not a person.

And as it turns out, the music we’re listening to isn’t even music.
So then, what is all of this?

While it doesn’t look or sound like it, it’s all just a bunch of data. That is, of course, a massive oversimplification. Everything you’ve seen and heard up to now was produced by a GAN. GAN stands for generative adversarial network, a deep-learning framework in which two models are trained simultaneously: one generative and one discriminative, or adversarial. Over many training epochs, the generator learns to produce samples that fool the discriminator, while the discriminator learns to tell generated samples from real ones. As training converges, the generator’s output becomes realistic enough to yield images, videos, and music, to name a few.
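To make that interplay concrete, here is a minimal sketch of a GAN training loop. The original post names no library, dataset, or architecture, so everything below is an assumption: PyTorch, tiny fully connected networks, random placeholder data, and illustrative hyperparameters.

```python
# Minimal GAN training sketch in PyTorch (illustrative; sizes and data are placeholders).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., a flattened 28x28 image

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator (the adversary): scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):                       # a real run would use many more epochs
    real = torch.rand(32, data_dim) * 2 - 1  # stand-in for a batch of real data in [-1, 1]

    # Train the discriminator: push real samples toward 1, generated samples toward 0.
    noise = torch.randn(32, latent_dim)
    fake = generator(noise).detach()         # detach so no generator gradients flow here
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator label its fakes as real (1).
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two optimizer steps are the whole adversarial game: each update to the discriminator raises the bar, and each update to the generator tries to clear it.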

Now you might be asking: why go through all this work to create something that mimics the real world? It turns out GANs can help us solve problems and complete tasks across a wide range of domains. For one, they are useful for generating synthetic data sets when real data is scarce or expensive to collect. They are also well suited to image and video work, where they can upscale resolution, assist with editing, and create realistic filters and views. GANs can even help in the medical field, for example by removing noise from imaging such as X-rays. The possibilities are constrained only by one’s imagination, time, and ability.

We’ve already seen a range of applications for GANs and deep research into them. Among the talented pool of researchers building on Ian Goodfellow’s original 2014 work, we have seen projects focused on business, tech, art, influence, and more. A particularly artful exploration is Angela May Xie’s thesis for the Department of Computer Science at The University of Texas at Austin. The research, aptly titled “Using a Generative Adversarial Network to Explore the New Aesthetic,” sought to address criticisms of the New Aesthetic. Artist James Bridle coined the term New Aesthetic in 2012 to articulate the human-digital and human-technology collaborative environment in which we find ourselves. In this blurred environment, we shift our behaviors, our worlds, and our representations. Engaging with that idea, Xie’s GAN-generated artworks were spectacular and holistically captured the essence and aura of the New Aesthetic.

As we become continuously engrossed in the spaces of convergence between the physical and digital, let’s challenge ourselves to push the limits of what’s possible while staying true to our very human, non-digital selves. 