My Dinner with Cyborg Neil Harbisson
One of the highlights of my recent trip to speak at the MBN Y Forum in Korea was speaking on a panel at the VIP dinner with Cyborg Neil Harbisson. As far as artists go, Harbisson is about as avant-garde as they come. That antenna you see hovering over his head is beyond your typical IoT wearable. It is what Harbisson considers an extrasensory organ that helps him detect colors outside the visible spectrum. And he does more than just wear it well: it is surgically implanted into his skull.
During the dinner I think I asked more questions of him than the rest of the room combined. When flash photography went off, I would ask him if he saw any extra colors. I was also curious what the sensation of infrared and ultraviolet felt like. I asked him whether he felt he was missing things when his cyborg appendage was off, to which I learned he could not turn it off any more than we could turn off our own natural senses. He answered all my questions patiently, though I hope I was not too annoying. He was just too fascinating not to harass. Harbisson was attempting to give his body more senses than he was born with. I had to learn as much as I could from him.
So I am not sure I consider him a robotic artist in the traditional sense, except that he himself is attempting to become a robot. He has begun championing cyborg rights and is serving as a visionary to help ease what he considers our inevitable evolution into cyborgs. Think about what he is championing and things become very interesting. In his talk he pointed out that never before in history have we had control over our own evolution, but now we do. And not just on a mechanical level, but on a biological level as well. The next 20 to 30 years will be interesting times.
Pindar
Simon Colton and The Painting Fool
Avoiding Uncreative Behavior
I was excited to recently make contact with Simon Colton, the artist and developer behind The Painting Fool. After a brief Twitter chat with him, I heard about his thoughts on the criteria that make things "uncreative". If I understood him correctly, it is not so much that he is trying to make software creative, but that he is trying to avoid things that could be thought of as "uncreative", such as random number generation.
He directed me to one of his articles that went into detail on this. What I read was really interesting.
The work begins with a rather elegant definition of Computational Creativity that I agree with.
Computational Creativity: The philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative.
There are many interesting thoughts throughout the rest of the paper, but the two concepts that I found most relevant were that
1) artificially creative systems should avoid randomness
and
2) they should attempt to frame, or give context to, what they are creating.
He begins by criticizing the over-reliance on random number generation in computationally creative systems. Using the example of poetry, Colton writes that software could use random number generation to create a poem with "exactly the same letters in exactly the same order as one penned by a person." But despite the fact that the two works are identical and read identically, the poem created with random numbers is meaningless by comparison to the poem written by a person.
Why?
Well, there are lots of reasons, but Colton elaborates on this to emphasize the importance of framing the artwork, where
"Framing is a term borrowed from the visual arts, referring not just to the physical framing of a picture to best present it, but also giving the piece a title, writing wall text, penning essays and generally discussing the piece in a context designed to increase its value."
In more detail he goes on to say...
"We advocate a development path that should be followed when building creative software: (i) the software is given the ability to provide additional, meta-level, information about its process and output, e.g., giving a painting or poem a title (ii) the software is given the ability to write commentaries about its process and its products (iii) the software is given the ability to write stories – which may involve fictions – about its processes and products, and (iv) the software is given the ability to engage in dialogues with people about what it has produced, how and why. This mirrors, to some extent, Turing’s original proposal for an intelligence test."
This view is really interesting to me.
In my own attempts at artificial creativity, I have always tried to follow both of these ideas. I avoid relying on random number generation to achieve unexpected results. And even though this has long been my instinct, I have never been able to articulate the reason why as well as Colton does in this writing. Given how important it is for a creative agent to provide a frame for why and how each creative decision was made, random number generation is a meaningless reason to do something, which in effect takes away from the meaning of a creation.
Imagine being struck by the emotional quality of a color palette in an artwork, then asking the artist why they chose that particular palette. If the artist's response was, "It was simple really. I just rolled a bunch of dice and let them decide which color I painted next," the emotional quality of the palette would evaporate, leaving us feeling empty and cheated that we were emotionally moved by randomly generated noise.
With this reading and the many works of Simon Colton and The Painting Fool in mind, I will continue to try to be as transparent as possible about the decision-making process of my creative robots. Furthermore, while I do try to visually frame their decision-making with timelapses of each painting from start to finish, I am now going to look at ways to verbally frame them as well. It will be challenging, but it is probably needed.
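To give a sense of what that verbal framing could look like in code, here is a minimal sketch. Every name in it is hypothetical and purely for illustration, not code from The Painting Fool or from my own robots; the point is simply that a palette choice carries a human-readable rationale with it instead of being the output of a dice roll.

```python
import colorsys

def choose_palette(dominant_hue, mood):
    """Pick a small palette from an analysis of the source scene and record
    a human-readable rationale alongside it, rather than rolling dice."""
    # Hypothetical decision rule: somber scenes get a near-monochrome palette,
    # everything else gets analogous hues around the dominant color.
    if mood == "somber":
        hues = [dominant_hue] * 3
        saturations = [0.2, 0.35, 0.5]
        rationale = ("Kept a near-monochrome palette because the scene read as "
                     "somber; the variation comes from saturation, not hue.")
    else:
        hues = [(dominant_hue + offset) % 1.0 for offset in (0.0, 0.08, -0.08)]
        saturations = [0.7, 0.6, 0.65]
        rationale = ("Chose analogous hues around the dominant color to preserve "
                     "the warmth of the original scene.")

    palette = [colorsys.hsv_to_rgb(h, s, 0.9) for h, s in zip(hues, saturations)]
    # The rationale travels with the palette so the system can later "frame" the work:
    # title it, caption it, or answer questions about why it looks the way it does.
    return palette, rationale

palette, why = choose_palette(dominant_hue=0.08, mood="warm")
print(why)
```

That rationale string is the seed of the development path Colton describes: it is the raw material for a title, a commentary, or a dialogue about why the work looks the way it does.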
If you want to see more of Simon Colton and The Painting Fool's work check out the You Can’t Know my Mind exhibition from 2013.
Deussen & Lindemeier's eDavid
A couple of years ago a video started spreading that showed an articulated robotic arm painting intricate portraits and landscapes. This robot was named eDavid, and it was the work of Oliver Deussen and David Lindemeier from the University of Konstanz. While many painting robots had preceded eDavid, none painted with its delicacy or captured the imagination of such a wide audience.
While the robot had remarkable precision it also seemed to have an artistic, almost impressionistic sensibility. So how did it go about creating its art?
When speaking of eDavid, Deussen and Lindemeier see its paintings as more of a science than an art. Their hypothesis is that "painting can be seen as an optimization process in which color is manually distributed on a canvas until one is able to recognize the content – regardless if it is a representational painting or not." While humans handle this intuitively with a variety of processes that depend on the medium and its limitations, eDavid uses an "optimization process to find out to what extent human processes can be formulated using algorithms."
One of the processes they have nearly perfected is the feedback loop, a concept I use with my own robots and first heard about from the painter Paul Klee. You make a couple of strokes, take a step back and look at them, adjust your approach depending on how well those strokes accomplished your intent, then make more strokes based on the adjustment. You do this over and over again until you finish the painting. Simple concept, right? Almost mechanical, even, but it is how many artists paint.
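As a rough illustration, here is a toy, software-only version of that loop, with a grayscale array standing in for both the target image and the canvas. It is only a sketch of the paint/observe/adjust idea under my own assumptions, not eDavid's actual visual-feedback code.

```python
import numpy as np

def paint_with_feedback(target, passes=200, strokes_per_pass=5):
    """Toy paint/observe/adjust loop: target and canvas are grayscale arrays in [0, 1]."""
    canvas = np.ones_like(target)                 # start from a blank white canvas
    for _ in range(passes):
        error = target - canvas                   # "step back and look"
        if np.abs(error).mean() < 0.02:           # close enough to the intent, stop
            break
        # adjust: put paint where the canvas misses the target the most
        worst = np.argsort(np.abs(error).ravel())[-strokes_per_pass:]
        for idx in worst:
            y, x = np.unravel_index(idx, target.shape)
            y0, x0 = max(y - 2, 0), max(x - 2, 0)
            # a "stroke" is a small patch nudged toward the target tone
            canvas[y0:y + 3, x0:x + 3] += 0.5 * error[y0:y + 3, x0:x + 3]
    return np.clip(canvas, 0.0, 1.0)

# usage: let the loop approximate a simple left-to-right gradient
target = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
result = paint_with_feedback(target)
```

Notice that nothing in the loop is random: every stroke is a response to the measured gap between what is on the canvas and what the painting is trying to be.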
To show how good the robot has become at painting with feedback loops, I leave you with my favorite eDavid creation. I am not sure what its title is, but how can you deny that the painting below looks and feels like it was painted by a skilled artist?
Mathew Stein's PumaPaint
I recently spoke with Mathew Stein about his painting robot PumaPaint. Way back in 1998 he equipped a Puma robotic arm with a brush, aimed a webcam at it, and invited the internet to crowdsource paintings with it. And he did all this before crowdsourcing was even a word. In the first two years of the project alone, over 25,000 unique users created 500 paintings. The robot continued creating crowdsourced paintings for about 10 years.
I asked Mathew if he realized how ahead of its time his PumaPaint Project was. He laughed and said he had not realized it until the New York Times wrote an article about him.
Oddly enough, though, Mathew Stein does not seem to consider himself an artist, or even realize that his project was an interactive performance art piece. For him it was about the technology and the interaction with people around the world. Successful exhibitions in today's art scene are all about audience interaction and experimentation with new media. Without even setting out to do so, Mathew Stein's PumaPaint achieved both on a global scale. People from around the world were able to use the newly emerging internet to control a teleoperated robotic arm and paint with each other. This would be a cool interactive exhibit by today's standards, and it was done 20 years ago.
Below are some examples of the crowdsourced art produced by PumaPaint. Mathew Stein considers the painting on the right from 2005 to be the single "most interesting piece from PumaPaint."
Whether or not Mathew Stein realizes he is an artist, I do. And much of my own robotic art has been inspired by his early work.
Harold Cohen's AARON
Earlier this year I received an email from Harold Cohen's assistant explaining why Cohen had not gotten back in touch with me sooner: he had passed away earlier in the month.
We had been talking at length about artificial creativity, and I had been wondering why Cohen stopped responding all of a sudden. At the time he was helping me prep for my TEDx Talk on artificial creativity and was not shy in his critique of both my talk and how he thought I might be exaggerating my robot's capabilities. As we talked, I found that our conversations on the subject often lasted far longer than it seemed either of us had planned for. The email his assistant was responding to was actually a draft of my TEDx Talk that I had sent him for review. I never heard back from him and figured that maybe he was no longer interested in my views on the subject. I had no idea his health was failing at the time.
In our talks I found his views on painting robots to be remarkably insightful and a little cantankerous. They were what you would expect from a man 40 years ahead of his time. His first painting robot, AARON, was built in the 1970s, when no one else was even considering some of the concepts he was exploring. One thing that stood out was his belief that a painting robot's primary shortcoming was that it did not create its own imagery. He was obsessed with the idea that most were merely printers executing a filter on an image. Perhaps a filter more complex than something you would find on Instagram or Snapchat, but a filter nonetheless. Though I cannot find the quote, I do remember reading something by him to the effect of "There are two kinds of painting robots. Those painting from photographs, and those lying about it."
I wish we had had longer to talk, because even though we disagreed on a lot, he was absolutely right about one critical aspect of robotic art: the ultimate goal is to break free from filters. I don't know exactly what that means yet, but whenever I create a new approach to artificial creativity, I ask myself how much of a filter it is, and try to make it less of one.
Doug Marx and Luke Kelly's Vangobot
The painting robot called Vangobot is simply awesome. It is that rare breed of art-making robot that actually leaves behind an aesthetically pleasing piece of art; most robots make abstract art or are performance pieces.
This is of course a self-serving statement, as I have made a similar robot, but just take a look at this one compared to mine. What I found most interesting is that while the artist/programmer team and I built our machines with the same sort of approach, our robots differ in a number of ways, not least of which is style. Their robot paints with multiple brushes and smoothly mixes colors with one another. Even though we share a basic approach, our robots have their own styles.
Vangobot makes what look to be impressionistic paintings, with smooth, flowing strokes compared to my harsh, rigid straight lines. While my robot can do curves, I like the cross-hatching effect of its work. Similarly, while Vangobot can probably do straight lines, it prefers algorithms that swirl paint onto the canvas.
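Just to illustrate the difference in character, here are two toy stroke planners: one laying down rigid cross-hatch lines, the other sweeping out a spiral. Both are hypothetical sketches for illustration only, not code from either robot.

```python
import math

def crosshatch_paths(width, height, spacing=10):
    """Rigid straight strokes: parallel lines in two passes at right angles."""
    paths = []
    for y in range(0, height, spacing):           # first pass: horizontal lines
        paths.append([(0, y), (width, y)])
    for x in range(0, width, spacing):            # second pass: vertical lines
        paths.append([(x, 0), (x, height)])
    return paths

def swirl_path(cx, cy, turns=4, points=200, growth=1.5):
    """Smooth flowing stroke: a single spiral sweeping outward from a center point."""
    path = []
    for i in range(points):
        t = turns * 2 * math.pi * i / points      # angle grows with each point
        r = growth * t                            # radius grows with the angle
        path.append((cx + r * math.cos(t), cy + r * math.sin(t)))
    return path

# usage: two very different "personalities" over the same canvas size
hatch = crosshatch_paths(width=300, height=200)
swirl = swirl_path(cx=150, cy=100)
```

Hand the same target image to each planner and you get two entirely different temperaments on the canvas.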
What will be interesting, as more and more artists realize how cool painting robots can be, is seeing all the styles that come out of them. Contrary to what I expect the public thinks will happen, I bet each robot will be as unique and distinct from the others as Vangobot is from my painting robot.
Almost like a personality.
If you have purchased one of mine, I highly suggest you get something from these guys and start a modern painting robot art collection. I guarantee you there will be many more entries into this genre in the coming years…