When artificial intelligence creates something, who gets the credit? Is it the programmer, the designer, the artist, or the AI itself? What about when AI destroys? Who or what takes the blame?

Panelists Jessica Fjeld, a clinical instructor at Harvard Law School Cyberlaw Clinic, Sarah Newman, a creative researcher at metaLAB at Harvard, Alexander Reben, an artist at Stochastic Labs/ANTEPOSSIBLE, and Sarah Schwettmann, a computational neuroscientist at the Massachusetts Institute of Technology, discussed these issues and other topics at “AI Creativity in Art, Neuroscience, and the Law” at SXSW in Austin last week.

Who gets the credit?
Fjeld asked panelists a few questions: Who is responsible for a work that’s created by a machine? Is it the artist who created the inputs for the AI model? Could it be the programmer who wrote the learning algorithm for the AI? At what point does the AI gain credit?

Schwettmann said her team had created an algorithm that “used machine learning and a set of example artworks to train a model on part of [an] artist’s artistic process and implement that model on a machine that could generate works in the artist’s characteristic style that were indistinguishable from the artist’s and run in parallel to the artist’s creations.”
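Schwettmann did not describe the implementation, but the general shape of such a system (fit a statistical model to an artist’s example works, then sample new pieces from it) can be illustrated with a deliberately tiny sketch. The toy below treats each “work” as a string and fits a character-level Markov chain in plain Python; the data and scale are invented, and a real system would use far richer models and representations.

    import random
    from collections import defaultdict

    def train_style_model(example_works, order=3):
        """Fit a character-level Markov chain on the artist's example works."""
        model = defaultdict(list)
        for work in example_works:
            padded = " " * order + work
            for i in range(len(work)):
                context = padded[i:i + order]
                model[context].append(padded[i + order])
        return model

    def generate_in_style(model, length=80, order=3, seed=None):
        """Sample a new piece that imitates the statistics of the training works."""
        rng = random.Random(seed)
        context = " " * order
        out = []
        for _ in range(length):
            choices = model.get(context)
            if not choices:
                break
            nxt = rng.choice(choices)
            out.append(nxt)
            context = context[1:] + nxt
        return "".join(out)

    # Example: two made-up "works" stand in for the artist's corpus.
    works = ["spirals within spirals, drawn in a single line",
             "a single line, looping into spirals and back again"]
    print(generate_in_style(train_style_model(works), seed=7))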

Even though the creations were generated by an algorithm built by her team with input from the artist, they cannot be displayed or published without the artist’s consent, Schwettmann said, because of a contract signed with the artist.

Reben agreed that the real question behind the generation of ideas or art is where ownership falls, and how far it extends, when AI is involved. For example, Reben designed a program that randomly generates patent ideas, and he published the results online.

“The idea is that if you are publishing an idea, say you’re publishing an academic paper or something online, it’s considered prior art and no one’s able to patent that idea anymore,” Reben said.

While 99.9 percent of the AI-generated ideas are nonsensical, Reben asked whether the 0.1 percent that could count as prior art would belong to him or to the AI that generated them.
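The article does not include Reben’s code, so the snippet below is only a guess at the mechanism: recombining fragments of technical language at random into patent-style claims, most of which come out as nonsense. The vocabulary here is made up for illustration.

    import random

    # Hypothetical vocabulary; a real generator would draw on a much larger corpus.
    DEVICES = ["umbrella", "toothbrush", "doorbell", "coffee mug", "bicycle helmet"]
    TECHNOLOGIES = ["a machine-learning model", "a piezoelectric sensor",
                    "a mesh network", "a haptic actuator"]
    PURPOSES = ["detect the user's mood", "harvest ambient energy",
                "translate birdsong", "schedule household chores"]

    def generate_patent_idea(rng=random):
        """Combine fragments at random into a patent-style claim."""
        return (f"A {rng.choice(DEVICES)} comprising {rng.choice(TECHNOLOGIES)} "
                f"configured to {rng.choice(PURPOSES)}.")

    for _ in range(5):
        print(generate_patent_idea())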

In this situation, would the random generation of ideas by the AI be enough to merit ownership? Schwettmann said we need to be careful when we talk about random processes.

“Talking about randomness is tough because at some level there was a model there, especially in terms of computers, especially in terms of AI,” Schwettmann said. “Somebody either wrote the code or the model was trained on a set of examples, and it could iterate on those examples and intentionally produce something.”

Could intentionality be an indicator of ownership by an AI?

“I think intentionality is really important to make something art,” Newman said. “I don’t think we should close off a future where AIs can have intentionality.”

Schwettmann said there is a huge field of open questions when it comes to AI and ownership. Her team at MIT is working on template agreements to spell out ownership rights over such materials.

“We’re working to develop a set of template legal agreements for collaborations between artists and hackers, developers, programmers, and to collect and publish a set of associated use cases,” Schwettmann said. “We fear a future where AI replaces the artist, and people are very wary about it, especially artists, but these tools and AI lend themselves to parallelization.”

Who takes the blame?
Isaac Asimov’s “Three Laws of Robotics” are: “1) a robot may not injure a human being, or through inaction, allow a human being to come to harm; 2) a robot must obey orders given it by human beings except when such orders would conflict with the first law; and 3) a robot must protect its own existence as long as such protection does not conflict with the first or second law.”

Focusing on the “First Law,” Reben’s team studied whether there were any AI systems in use today that had broken that law.

“We couldn’t think of any. The two closest are the Close-In Weapons … that has no real decision process being made here. This is basically a fancy landmine,” Reben said. The second is a “drone-controlled system. Drones do kill people, but there is still a human in the loop here.”

Since there were no clear examples of AI violating the “First Law,” Reben created one: a robot that, when his finger was placed near it, would choose whether or not to strike, drawing blood if it did.

Reben said the robot’s choice to strike him had distinguishing features: “It’s a non-random and unpredictable decision. That is, it’s not a decision of me, the programmer, by proxy of the robot. So it makes the decision in a way that I can’t predict, yet it’s not random.”
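Reben described that property but not the mechanism behind it, so the sketch below is only one hypothetical way to get a decision that is deterministic rather than random yet still unpredictable to the programmer: make the strike depend on a hash of accumulated sensor readings. The class, threshold, and sensor stand-in are all invented for illustration.

    import hashlib
    import time

    class FirstLawRobot:
        """Toy model of a decision that is deterministic but opaque to its author."""

        def __init__(self):
            self.history = b""

        def sense(self):
            # Stand-in for a real sensor: fold the current time in nanoseconds
            # into the robot's history, something the programmer cannot know
            # before the run.
            self.history += str(time.time_ns()).encode()

        def decide_to_strike(self):
            # Deterministic function of everything sensed so far: the same
            # history always gives the same answer, but the answer is opaque
            # until the moment of decision.
            digest = hashlib.sha256(self.history).digest()
            return digest[0] < 64  # strikes roughly a quarter of the time

    robot = FirstLawRobot()
    robot.sense()
    print("strike" if robot.decide_to_strike() else "spare")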

It’s unexplained why the robot chose to strike at any particular moment, and that very inexplicability could drive humans to imagine reasoning behind it, Newman said.

“Our human disposition to tell a story about what it is that’s happening, especially as we get to a place where it’s harder to understand and explain what’s happening,” Newman said.

In creating stories to explain the unknown, Newman said, we project human values onto AI, but the AI doesn’t necessarily hold those values.

“How do we make sure the systems are developing values that are aligned with ours? This is especially challenging in a time where we can’t agree about what our values are,” Newman said.

As for questions of ownership and blame, Schwettmann said there are no right answers “except ones that avoid the fears that we have—like replacement of artists and creating terrible, horrible things. We want to create beautiful things.”