The Day a Robot Writes an Essay:
Legal Implications of AI-created Contents
As the use of artificial intelligence (AI) technology becomes widespread, the legal implications of AI-created content cannot be ignored. AI technology is being utilized in various scopes of work, from robot journalism to the creation of novels that stand on their own, as well as pieces of artwork. At the same time, AI technology is not without its flaws: in several cases, AI technology has demonstrated that being capable of "deep learning" does not make it omniscient. With both the positive and negative effects of AI-created content in view, it is necessary to review the current laws regarding these products and, when necessary, revise existing legislation or put new laws into place. In jurisdictions such as the United Kingdom and Hong Kong, legislation regarding the authorship and copyright of computer-generated works exists; in other countries, like the United States and Japan, copyright law specifically limits authorship to persons. Even in countries whose laws do not specifically refer to the authorship of computer-generated works, legislation is beginning to be drawn up. In a situation where the economic potential of commercializing an imperfect technology is pitted against its possible side effects, proper legislation is necessary to ensure that development occurs both quickly and safely.
I. Introduction
In March 2016, Google DeepMind's AlphaGo turned the attention of the Korean public to the advancement of artificial intelligence in this day and age. Indeed, artificial intelligence (AI), long in the making, is no longer a thing of sci-fi films: the technology, whether one is aware of it or not, is being used in a multitude of fields, including healthcare (IBM's Watson) and legal services (ROSS). Accordingly, many countries across the world are encouraging further development and application of AI technology. Such technological advancement is not without social, ethical, and legal dilemmas, however. For example, artificial intelligence is by no means omniscient; that much was made clear in the fourth game between AlphaGo and Lee Sedol, in which the former lost to the 9-dan player. Imperfections such as these inevitably give rise to the following question: who is responsible when an artificial intelligence makes a mistake that leads to financial losses or even fatal accidents? Even setting aside the question of liability, and on a more positive note, AI technology raises the issue of ownership: how is society to handle the authorship not only of AI technology itself but also of the products generated by that technology? As countries deal with AI technology and the regulations that must follow these developments, it is imperative to understand the scope of AI-created content and to explore the different legislative questions that arise from its use.
II. Development of AI-Created Contents
AI technology today is used to create not only relatively straightforward non-fiction pieces, such as news articles and financial reports, but also more creative works, like paintings and novels. A key characteristic of AI-generated content lies in its autonomy. According to a study on the application of robot journalism in the Korean news industry, for example, AI-generated journalism, or robot journalism, is defined as "a method in which a robot autonomously analyzes data and produces articles based on given algorithms after being programmed by humans."[i] In this respect, a robot journalist is viewed as an "active member in the news generating process."[ii] That is, artificial intelligence is composed of intelligent agents that not only perform a given task but also learn from previous tasks and ultimately have a capacity for "continual learning."[iii]
- Specific Cases of AI Technology Applications
Currently, news outlets apply AI technology in the production of relatively simple news articles. For example, the Los Angeles Times makes use of robot journalism in reporting earthquakes. Although these articles were initially published under the byline of the computer programmer Ken Schwenke, as of 2018 they are published with the byline "Quakebot," referring to the "robot journalist" itself.[iv] Similarly, the Associated Press currently uses the Wordsmith program offered by Automated Insights, an American technology company, to produce articles relating to finance.[v]
In some cases, AI technology is used to produce more creative works. In 2016, AI technology was applied in the creation of a painting dubbed "The Next Rembrandt": Rembrandt's existing artworks were analyzed with AI technology in order to create a new painting that resembled the style of an actual Rembrandt.[vi] Also in 2016, a robot-written novel, titled The Day a Computer Writes a Novel, was entered into a national literary contest and made it past the first round.[vii] Although AI-generated content has yet to be fully commercialized or mass-produced, especially in the case of more creative works, the technology is certainly not far from becoming intertwined with the everyday lives of today's society.
III. Legal Issues Regarding AI-Created Contents
A legal issue that must be addressed in legislating on AI-created content is copyright. While many countries have joined the race to develop AI technology, the speed with which these same countries address the concept of AI authorship in their legislation, or the lack of such a concept, varies.
In certain countries, the scope of authorship specifically covers computer-generated content. In the United Kingdom, the Copyright, Designs and Patents Act 1988 specifically refers to non-human works. In defining the "author" of a work, the act states that "in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken."[viii] In this case, the authorship, and therefore the copyright, is assigned to the programmer of the AI software in question.
Hong Kong also addresses authorship where computer-generated content is the subject of inquiry, stating that "in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken."[ix] As in U.K. law, Hong Kong law thus specifically attributes authorship to the programmer of the AI software.
In contrast, other countries, such as the United States, Japan, and Korea, do not specifically address the authorship of computer-generated or AI-generated content. In the United States, authorship is attributed only to a human being. The Compendium of U.S. Copyright Office Practices states:
“The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects ‘the fruits of intellectual labor’ that are founded in the creative powers of the mind. Because copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”[x]
U.S. courts have also ruled in line with this position in Naruto, et al. v. Slater, et al., denying a monkey the copyright to its selfie because the monkey was "not an 'author' within the meaning of the Copyright Act."[xi] While neither the law nor the court ruling makes clear whether computer-generated works will be acknowledged as the work of the programmer, as is the case in the U.K. and Hong Kong, it can be said that the authorship of AI-created content will not, at the very least, be attributed to the AI itself. Thus far, then, the U.S. does not seem to extend authorship to robot journalists, painters, or authors.
Furthermore, the Copyright Office states that it "will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[xii] If the continual-learning, or deep-learning, process of AI technology, in which the autonomy of the process is highlighted, can be understood as a process without the "intervention from a human author," the U.S. Copyright Office does not seem to grant copyright registration to either the AI technology itself or its programmer.
As one of the leading countries in AI technology, Japan is also a frontrunner in AI-related legislation. Like U.S. law, Japan's copyright law expressly denies authorship to non-human entities by defining an author as a "person who creates a work."[xiii] At the same time, Japan is currently in the process of drawing up legislation regarding AI patents and copyrights. The rationale behind this expedited legislation is that Japan aims not only to prevent frivolous lawsuits but also to boost its industrial competitiveness in AI technology, that is, to incentivize AI development.[xiv] Thus, while Japan does not yet offer specific legislation regarding AI-created content, it emphasizes the necessity of drawing it up in order to achieve international competitiveness in the technology market.
Korea, like Japan and the U.S., does not mention authorship in the context of computer-generated works. Article 2 of the Korean Copyright Act defines an "author" as a "person who creates a work" and does not address computer-generated works in terms of authorship. While no specific legislation has been enacted in Korea thus far, in early 2017 the Ministry of Science and ICT put forward the tentatively titled "Intellectual Information Society Act (지능정보사회 기본법)."[xv] Although this act deals primarily with regulation of the technology itself, as opposed to the copyright of AI-created content, it nonetheless underscores the necessity of legislation addressing diverse aspects of AI technology.
Liability for content created by AI is also a crucial consideration in AI legislation. The question of liability has already proven controversial for other AI technologies, most notably autonomous vehicles. In the fatal Tesla crash of 2016, for example, blame was directed not only at the driver but also at the company and the technology itself.[xvi]
Likewise, AI-generated content is not free from questions of liability. AI-created content has already shown glitches that onlookers may find disturbing. Microsoft's AI chatbot Tay, for instance, tweeted racist and inflammatory comments as a result of learning from other Twitter users.[xvii] In another example, a robot named Sophia, developed by a Hong Kong company, answered the question "Do you want to destroy humans?" with an eerie "Okay. I will destroy humans."[xviii] Both incidents not only expose the potential problems that can arise from the application of AI but also lead to questions of accountability. Simply put, if accusations of hate speech were to be made in such a case, who is to be held responsible? To take it a step further, if legal action were to be taken in this or a similar situation, whose names would appear on the legal documents? Ultimately, who would show up in court, and who would pay the price?
Cases such as Tay and Sophia make it evident that AI technology is not free from glitches. As articles, films, and novels, among others, are created by AI technology, who is to take legal responsibility in cases of libel or plagiarism? As the use of AI technology becomes more and more widespread, legislators must deal with the question of liability in cases where the brunt of the work, if not all of it, is carried out by non-human entities, a deviation from the legal status quo.
IV. Conclusion
Today, robots are no longer limited by the confines of our imagination: soon, we may find robots winning Nobel Prizes in Literature or selling their own paintings for several million dollars. To be sure, these particular images may still seem far-fetched, and the commercialization of this technology may be years in the making. Nonetheless, there is no doubt that artificial intelligence is making its way into our society as a supplier of various kinds of content.
As artificial intelligence presents economic benefits as well as a more economical means of production, countries are racing toward faster and better development of the technology. In this process, proper legislation regarding the products of artificial intelligence is crucial. While creative content may be more easily overlooked than the faster-paced and more lucrative inventions that also involve AI, questions of authorship and liability must nonetheless be dealt with in order to encourage scientists and scholars in these fields as well. Ultimately, both the right to reap the benefits of AI inventions and the possible repercussions of any technological backfires must be addressed to strike the right balance between accelerated and safe advancement in artificial intelligence technology.
[i] Kim, D. (2015). Two essays on robot journalism in the South Korean newspaper industry. Doctoral dissertation, Korea University, Seoul.
[iii] Kirkpatrick, J. et al. Overcoming Catastrophic Forgetting in Neural Networks, Proceedings of the National Academy of Sciences of the United States of America, Vol. 114 No. 13. (2017), p. 3521. Available at http://www.pnas.org/content/114/13/3521.full.pdf. (last visited Jan. 28, 2018).
[iv] Earthquake: 3.7 quake strikes near Santa Barbara, Los Angeles Times (January 6, 2017).
[vi] A ‘New’ Rembrandt: From the frontiers of AI and Not the Artist’s Atelier, NPR (April 6, 2016).
[vii] Is the future award-winning novelist a writing robot?, Los Angeles Times (March 22, 2016).
[viii] United Kingdom’s Copyright, Designs and Patents Act 1988, section 9(3).
[ix] Hong Kong’s Copyright Ordinance (Cap. 528), section 11.
[x] U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2017).
[xi] Naruto, et al. v. Slater, et al., No. 15-cv-04324-WHO, available at http://files.courthousenews.com/2016/01/29/monkey%20selfie.pdf (last visited Jan. 28, 2018).
[xii] Supra note 10, § 312.2.
[xiii] Japan’s Copyright law, article 2.
[xiv] The Intellectual Property System Study Group for the Fourth Industrial Revolution, The Intellectual Property System for the Fourth Industrial Revolution (2017).
[xv] Artificial Intelligence (AI), Virtual Reality (VR), FinTech Regulations to Loosen, Hankook Ilbo (February 16, 2017), available at http://www.hankookilbo.com/v/152ff16fca624d9ea2d71696489d53ef (last visited Jan. 28, 2018).
[xvi] Driver in Tesla relied excessively on Autopilot, but Tesla shares some blame, federal panel finds, Los Angeles Times (September 12, 2017).
[xvii] Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter, The Guardian (March 24, 2016).
[xviii] Meet the first-ever robot citizen—a humanoid named Sophia who once said it would ‘destroy humans’, Business Insider (October 10, 2017).