This is a very interesting reform but it isn’t going anywhere.
AI can’t author anything. It may generate random output, like fractals, or replicate things it sees or learns, like text or the social media comments churned out by bot farms. But with independent AI “art” you just know something is missing. A robot playing the classical composers on the piano, no matter how perfect, is super boring, unless it is an exact replica of a human virtuoso’s performance, and that is called reproducing a sound recording, as a robot.
I won’t even get into (the mild horror of) musical compositions by robots. The most incompetent human DJ, whose beats receive zero copyright protection under the current framework, wins over a bot on skill and judgment without even trying, if only because the emotion and intuition required to make an original work are missing from AI. Sure, you can feed a few hundred loops into a bot and tell it to mix sound, but that just gives it more things to copy, not the ability to produce its own beats in sync with the emotional architectures unique to human creators.
Certain types of text, such as citations of jurisprudence or the application of law to facts, as in court rulings, do not require any emotional input to make an original work; quite the contrary. These are repetitive and easily automatable tasks, such as applying settled law to sets of facts that an algorithm has encountered in previous judgments. Bias often seeps into court rulings in one way or another. In that limited sphere, algorithms are better suited than humans to flag, analyze, and assess sensitive facts without parasitic emotional input, so long as they are properly programmed to be bias-free and to do just that, without taking parties’ personal characteristics into account to their detriment. AI can be tweaked to be objective; humans cannot be re-programmed to stop perpetuating bias. The European Court has been using AI to write judgments since 2016. There is no copyright on judgments, and this kind of debate falls outside copyright reform.
In every other sphere where original works don’t rely on rigid, repetitive, and automatable tasks, AI fails on skill and judgment.
Imagine a robot suing you for copyright infringement
I personally am not inspired by this reform. It reminds us that humans are behind everything that AI does.
Copyright in “AI-generated” works goes to the AI’s coders and to whoever, or whatever, owns the code and the coders (most of the time, corporations), if the resulting works are created independently of human input.
In a different scenario, where a user creates content via properly licensed AI, the right to content generated by a user via automated creator tools goes to the user under the user-generated-content provision. Rights will be modulated by the nature of the content: they could be simple economic rights, or very limited, fair-use, strictly non-commercial rights if the content incorporates other copyrighted material. So I don’t see what would have to change in that regard.
The same goes for liability: whoever or whatever owns and controls the code and the coders of an AI is liable for damages caused by that AI. Algorithmic bias is 100% man-made. Hiding behind AI is quite a nice try.
I personally know more robots that infringe copyrighted material than robots that create original works. The whole idea of machine learning is to absorb and internalize human-made content through (surprise) copying, replicating, and reproducing, and of course mining, collecting, analyzing, cross-referencing, compiling, and rearranging users’ personal information, so as to imitate human activity.
When it comes to licensing, the act only requires that licences be put in writing. Everything else, such as drafting and specific clauses, is governed by provincial contract law, consumer law, and so on. Abusive clauses certainly abound, but copyright reform cannot address that.
Here are some of the questions for reflection from the working document.
- How are individuals and organisations using AI to produce or to assist in the production of works or other copyright subject matter?
- Are AI-assisted works the result of deliberate choices by humans (potentially exhibiting skill and judgment), are there important variations in that regard depending on the AI application, and how could that change as AI becomes more autonomous?
- What challenges or disputes are being encountered when determining copyright and authorship or ownership for works or other subject matter produced by or with AI?
- Is the uncertainty surrounding authorship or ownership of AI-assisted and AI-generated works impacting the development and adoption of AI applications to produce works or other subject matter? If so, how?
- What risk mitigation measures are businesses taking to protect their investments when using AI to produce works and then commercialising those works? Similarly, what risk mitigation measures are businesses taking when commercialising AI applications that can be used to produce works?
- What kind of licenses are being employed for the use of works or other subject matter produced with AI? What are the implications for licensing if those that develop the AI are deemed to be the authors or owners of AI-generated works?
I understand that “stakeholders” want clarity, but what they really seem to want is appropriation of works created through their apps, coupled with zero liability for IP damages caused through those apps. Maybe, but no. One doesn’t go without the other. If you want to appropriate users’ content, you have to block the use of copyrighted material and police potential infringement. That is not workable unless all fair-use and user-generated-content provisions are abrogated from the act, which would defeat the purpose of the act.
On liability, the questions are as follows:
- When commercializing AI applications, what measures are businesses taking to mitigate risks of liability for infringement for the AI application itself and for an AI-generated or AI-assisted work?
- What challenges are copyright holders facing when licensing their rights in the context of AI? What challenges are copyright holders facing when enforcing their rights in the context of AI, and how could these be solved?
- What are the barriers to determining whether an AI accessed or copied from a specific work during the process of generating, or contributing to, an infringing work?
- To what extent do AI applications contain reproductions of the copyrighted content used in training them? Are there important variations across types of AI?
- Are creators and users of AI applications facing additional risks of infringement for activities besides reproduction (e.g. making AI-generated or AI-assisted content available online)?
- Similar to the question in section 2.2 above, who are the different human parties involved in creating an AI system that can generate works, or assist in generating works, and what factors affect their role in that process?