Reviewing Article 17 DSM and the Impact of Content Recognition Technology on our current Legal Framework, Digital Age and Fundamental Rights

Student: H.A. (Hidde) de Voogt
Student number: 2683761
Thesis supervisor: M.Y. Schaub
Date: July 5th, 2021
E-mail: h.a.de.voogt@student.vu.nl

ABBREVIATIONS

CJEU      Court of Justice of the European Union
CRT       Content Recognition Technology
DSMS      Digital Single Market Strategy
DSMD      Digital Single Market Directive
ECD       E-Commerce Directive
ED        Enforcement Directive
EU        European Union
EUCFR     Charter of Fundamental Rights of the European Union
IFPI      International Federation of the Phonographic Industry
InfoSoc   Information Society Directive
OCSSP     Online Content-Sharing Service Provider
OECD      Organization for Economic Co-operation and Development
UGC       User Generated Content

TABLE OF CONTENTS

CHAPTER 1: SUBJECT AND RESEARCH
1.1 BACKGROUND
1.2 PURPOSE AND LEGAL PROBLEM STATEMENT
1.3 METHODOLOGY
CHAPTER 2: THE VALUE GAP
2.1 USER GENERATED CONTENT PLATFORMS
2.1.1 User Generated Content on YouTube
2.1.2 Content ID
2.2 THE VALUE GAP
2.2.1 Definition
2.2.2 Critique on the Value Gap
2.2.3 Scale and numbers
CHAPTER 3: PLATFORM LIABILITY UNDER CURRENT LEGISLATION
3.1 THE SAFE HARBOR
3.1.2 Applicability
3.1.3 Scope of Protection
3.2 GENERAL MONITORING PROHIBITION
3.3 INFORMATION SOCIETY DIRECTIVE (EU) 2001/29/EC ARTICLE 3(1)
3.3.1 Acts of communication to the public
3.4 CONCLUDING REMARKS
CHAPTER 4: DIRECT PLATFORM LIABILITY UNDER DIRECTIVE (EU) 2019/790
4.1 APPLICABILITY OF ARTICLE 17 DIRECTIVE (EU) 2019/790
4.2 ACTS OF COMMUNICATION TO THE PUBLIC
4.3 LIABILITY AND LIMITATIONS
4.4 LICENSING
4.4.1 Differences
4.4.2 Challenges
4.5 UPLOAD FILTERS
4.5.1 Differences
4.5.2 Challenges
4.6 CHALLENGES TO HUMAN RIGHTS
4.6.1 Freedom of Expression and Information
4.6.2 Freedom to Conduct a Business
4.7 EXCEPTIONS
4.7.2 Challenges
4.8 CONCLUDING REMARKS
CHAPTER 5: ALTERNATIVE SOLUTIONS TO THE VALUE GAP
5.1 LEGAL UNCERTAINTY
5.2 LICENSING
5.3 UPLOAD FILTERS
CHAPTER 6: CONCLUSION
BIBLIOGRAPHY

CHAPTER 1: SUBJECT AND RESEARCH

1.1 Background

Global recorded music revenues grew strongly from 1973 through 1999.1 As music was strictly accessible through physical copies, vinyl and CDs dominated the market. In 1999, content became digitally available on Napster, the first service to enable peer-to-peer sharing of music. This marked the start of the rise of pirating sites, through which consumers had access to unlicensed content shared without the consent of rightsholders. Artists did not receive any remuneration for their creative works, and global music revenues fell drastically as a result. Streaming platforms like Spotify and Apple Music brought opportunities to close the gap between the growing amount of digital content and the music industry's declining income.
2 Streaming formed a legal alternative to pirating websites, and the general trend shifted towards on-demand streaming. As we will see, artists benefit greatly, as the platforms pay remuneration for all content they provide to their customers, funded by subscription fees. At first sight, this seems like a viable solution for equitable remuneration in the digital age. However, during the last decade we have seen a surge in so-called User Generated Content (UGC). The participative web has opened the opportunity for users to interact with each other and contribute to an open, democratic exchange of views and ideas through new websites, social media and open discussions.3 At the moment, the most used UGC-platform is YouTube. Users create and upload videos for others to watch. It is inevitable that some UGC is infringing. The question therefore becomes what YouTube's role should be in balancing the interests regarding fair remuneration for rightsholders on the one hand, and those of platforms, internet users and the free web on the other. In any case, our current legal framework exempts YouTube from direct liability in situations where the platform meets the criteria set forth in Article 14 of the E-Commerce Directive (ECD). Article 14 ECD therefore lays the foundation for the so-called Safe Harbor legislation. Additionally, Article 15 ECD provides that Member States shall not impose a general monitoring obligation on providers of UGC-platforms.4 Upon reading the provision, one would think that UGC-platforms cannot be required to proactively filter content before it is posted by one of their users. After all, this would constitute a general monitoring obligation. However, according to the L'Oréal v. eBay5 and Glawischnig-Piesczek v.
Facebook6 cases, platforms can be ordered by injunction to prevent further infringements of the same kind, by the same seller and in respect of the same trademarks.7 Rightsholders therefore have the possibility of blocking (future) infringing content through injunctions. Still, the exemption of UGC-platforms from liability for content uploaded by their users ultimately seems to have led to a mismatch between the value that online sharing platforms extract from creative content and the revenue returned to the copyright-holders. This is otherwise known as the Value Gap.

1.2 Purpose and legal problem statement

The European Parliament and the Council adopted the Directive on Copyright and related rights in the Digital Single Market (DSMD), which entered into force on June 6th, 2019. The Directive paves the way towards a true digital single market.8 Article 17 DSMD regulates online content-sharing service providers (OCSSPs) with the aim of safeguarding fair remuneration for creators and tackling illegal content on the internet, while ensuring freedom of expression and information with regard to legal content.9 It constitutes the first piece of legislation to address the Value Gap and is the product of extensive lobbying by the music industry. Hosting platforms (OCSSPs) falling within the scope of the DSMD no longer benefit from Safe Harbor legislation, at least not with regard to copyright-related claims.

1 Barker, G.R. (2019), p. 6.
2 Wolfson, S. (2018).
3 Senftleben, M. (2019), p. 2.
4 Ibid., Article 15.
5 Case C-324/09, L'Oréal SA v. eBay International, 2011.
6 Case C-18/18, Glawischnig-Piesczek v. Facebook Ireland, 2019.
7 See: Recitals 45, 46 & 47 of Directive 2000/31/EC; Case C-324/09, L'Oréal SA v. eBay International, 2011, par. 131.
8 The European Parliament and the Council of the European Union (2019). Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.
For other cases, the Safe Harbor of Article 14 ECD continues to apply.10 The DSMD thus enables direct platform liability for copyright-infringing UGC. In principle, OCSSPs are directly liable for infringing content posted by their users. Article 17 DSMD provides two ways for OCSSPs to avoid this liability. First, they can conclude licensing agreements with rightsholders, which must be fair and appropriate.11 A concluded license exempts YouTube from liability for the covered content uploaded by its users, mainly because it introduces a stream of income for artists. Second, an OCSSP may prevent the availability of infringing works on its platform altogether.12 OCSSPs have to make 'best efforts' to guarantee the absence of specific works and other subject matter.13 Where such best efforts are made, the basis for liability ceases to exist. Both obligations, licensing and best efforts, pose dangers to the Web 2.0 as we know it, but also to the fundamental rights of the parties involved and to the EU copyright acquis. Firstly, the legislative design of Article 17 DSMD clearly favors the authorization avenue. However, rightsholders are not obliged to grant an authorization or to conclude licensing agreements.14 Furthermore, the licensing obligation may prove difficult for platforms to adhere to.15 It would be very hard, if not impossible, for a platform to obtain licenses for all content its users might build upon, especially where rightsholders are unknown. This could affect the web as we know it, which enables users to enjoy their fundamental freedoms of expression and information on a large scale and in great diversity. Additionally, the 'best efforts' requirement to guarantee the absence of 'specific works and other subject matter' may pose challenges. Currently, the only way to achieve this goal is by means of content recognition technology.
While YouTube already implements this technology voluntarily, it remains to be seen whether it proves accurate, effective, and able to block only infringing content. Furthermore, the filtering obligation ignores the tension between Article 17 DSMD and Article 15 ECD, which prohibits a general monitoring obligation.16 This could prove problematic, as legitimate content could be blocked in an attempt to avoid liability under Article 17 DSMD, harming the fundamental rights of free speech and information once more. The best efforts requirement might also prove too demanding for smaller businesses to adhere to. In turn, this might infringe the right to conduct a business. The European legislator seems to have been persuaded that, under the protection of the ECD storage Safe Harbor, YouTube has not been providing a fair share of value for the use of recorded music on the platform. Article 17 DSMD expelled YouTube from the Safe Harbor of Article 14 ECD, made it liable under Article 3(1) of the Information Society Directive (InfoSoc) for unauthorized communication to the public, and mandated that it obtain licenses for the content it hosts.17 Despite the substantial amount of criticism, it is clear that the music industry has been underpaid by platforms like YouTube for too long. It is time to act, and the most suitable instrument is legislation at the European level. Based on the above, this thesis will examine the following research question:

In what ways could the challenges that Article 17 DSMD poses to our current legal framework, the open Web 2.0 and the fundamental rights involved be resolved?

Sub-questions:
Chapter 2: What is the Value Gap?
Chapter 3: What is the existing regulatory framework for liability of intermediary service providers?
Chapter 4: In what ways does the new liability regime differ from the old situation, and what challenges does it pose?
Chapter 5: In what ways, if at all, could the challenges identified in chapter four be addressed?

9 European Commission (2015). COM(2015) 192 final, par. 3.3.
10 Bridy, A. (2019), p. 323.
11 Directive 2019/790/EU, Article 17(1).
12 Ibid., Article 17(4) and Recital 39.
13 Ibid., Article 17(4) and Recital 66.
14 Ibid., Recital 61.
15 See, for example: Senftleben, M. (2019); Senftleben, M. (2018); Metzger, A. & Senftleben, M. (2020).
16 Stalla-Bourdillon, S. (2016); Case C-360/10, SABAM v. Netlog NV, 2012 E.C.R. 85, par. 51-53.
17 Bridy, A. (2019), p. 333 & 340.

1.3 Methodology

In order to answer the research question at hand, this thesis is structured as follows. Chapter two provides the reader with a definition of User Generated Content platforms, using YouTube as an example. The chapter also describes the way Content ID works and how it played a role in the creation of the Value Gap. Additionally, the notion of a Value Gap is defined, as well as its current size and the critique it has attracted. This is especially important to understand the necessity for new regulation regarding intermediary liability, in this case Article 17 DSMD. Chapter three aims to explain the current legislation within which the Value Gap has come into existence. This chapter focuses on the E-Commerce and Information Society Directives. In addition, relevant case law is discussed to provide the reader with a substantive analysis of the current legal framework regarding the liability of User Generated Content providers. Chapter three ultimately prepares the reader for a comparison, as well as an identification of possible challenges, in the following chapter. After all, a clear view of the legislative landscape before the DSMD enables a comparison between both situations in order to identify possible challenges. Chapter four introduces the reader to the solution to the Value Gap which has been introduced by the European Commission. The aim of this chapter is twofold.
First, it aims to define the different elements of the new liability regime which follow from Article 17 DSMD. Second, Chapter four compares both liability regimes in order to identify possible challenges regarding users' fundamental rights, OCSSPs' right to conduct a business and the participative web. This structure is used repeatedly within the various subparagraphs. A conclusion follows at the end of the chapter. Chapter five focuses on the research question at hand, based on the insights obtained in the previous chapters. Possible solutions to the challenges identified in Chapter four are discussed. The main goal of Article 17 DSMD is to provide equitable remuneration for artists, while not unnecessarily violating the fundamental rights of those involved. In this regard there may be ways to improve the balance between the fundamental rights involved, while still securing much-needed remuneration for artists in our digital age. Lastly, the following remarks are of importance for a thorough understanding of the main objective of this thesis. First, by 'the Music Industry', I mean recorded music only. Current circumstances related to COVID-19 mean that online services are more relevant than ever. Physical copies like CDs and vinyl are irrelevant for the research question at hand; I have excluded them from the calculations in Chapter two in order to define the Value Gap. Additionally, it is important to specify 'artists'. By artist, I mean the natural person who has a revenue stream through his or her copyright, or who otherwise earns money through creative efforts within the music industry. It is therefore possible that one song or performance contains multiple copyrights. As the Internet and the Music Industry are not geographically limited, I have chosen to address the topic from a European, supranational perspective. Questions arising with regard to jurisdiction or applicable law will therefore be left outside the scope of this research.
This inevitably means that I rely on the text of the directives, and not on their national implementations. Solutions proposed in Chapter five will likewise be given within the international, European context.

Chapter 2: The Value Gap

What is the Value Gap?

This Chapter focuses on UGC-platforms and the role they have played in the creation of the Value Gap. Chapter two ultimately aims to define the elements which have led to the Value Gap, and to investigate its current size as well as the critique it has attracted. By doing so, this chapter introduces the reader to the problem at hand. By answering the sub-question, the foundation is laid for further exploration of the main source of the Value Gap in the remainder of this thesis.

2.1 User Generated Content Platforms

User Generated Content is a rather vague term and can have a broad definition. The term refers to all kinds of information distributed by individual users via internet services. This information may have been created by the users themselves or by third parties (in the latter case also referred to as user-submitted content).18 However, this thesis requires a stricter interpretation in order to use YouTube as an example in the following Chapters. A UGC-platform within the meaning of the DSMD can therefore be defined more specifically. The three core elements of UGC given by the Organization for Economic Co-operation and Development (OECD) can prove useful here.19 The OECD is an intergovernmental forum which provides a setting for member countries to compare policy experiences, seek answers to common problems, identify good practice and coordinate domestic and international policies. Currently, there are 37 member countries, and the OECD's definition of UGC is the most widely cited.20 The definition is especially useful for this thesis because it is shared on a European, supranational scale.
First, content must be "published in some context, be it on a publicly accessible website or on a page on a social networking site, only accessible to a select group of people."21 Thus, the content must be shared through a platform or environment which enables a select group of visitors to view the content online (publication requirement). Second, users must add value to their work by means of creative additions. Creative additions imply that a certain amount of creative effort was put into creating the work or into adapting existing works to construct a new one22 (creative effort requirement). The minimum amount of creative effort is difficult to grasp and depends heavily on the circumstances, but generally does not constitute a high threshold. Even a home video of a kid dancing to the rhythm of her favorite tune, or the remix of a song, can constitute a creative effort.23 Another example is a movie review, in which the user gives his own opinion about a film, using images from the film itself. Third, UGC is created outside of the commercial sphere. While the platform itself may have commercial incentives, the content is created by its users, without "an institutional or a commercial market context"24 (requirement of creation outside of professional routines and practices). While the open nature of these core elements makes it difficult to define a UGC-platform, the requirements do offer sufficient precision to determine which platforms are, and which are not, UGC-platforms. At its core, a UGC-platform can be defined as a platform which enables its users to publish content for a specific group of people to view. The content only becomes user generated once the creative effort requirement has been fulfilled. Additionally, the content itself must not be created by a commercial entity, but rather by genuine users of the platform.

18 Reus, J. (2012), p. 413.
19 Wunsch-Vincent, S. & Vickery, G. (2007).
20 See, for example: Christodoulides, G. (et al.) (2012), p. 3.
21 Wunsch-Vincent, S. & Vickery, G. (2007), p. 8.
22 Ibid.
23 Lessig, L. (2008).
24 Ibid.

2.1.1 User Generated Content on YouTube

YouTube was one of the original competing services aiming to remove the technical barriers to the widespread sharing of online videos. The website provides a simple, integrated interface which enables users to upload, publish and view videos without high levels of technical knowledge.25 Ever since its establishment in 2005, the platform has used the slogan "broadcast yourself" to encourage users to create their own content and transform themselves from mere consumers into video producers. Over time, YouTube has grown to be the biggest UGC-platform in existence. Over 500 hours of content are uploaded every minute.26 As of 2020, YouTube was the number one site for web traffic worldwide and has therefore surpassed its goal.27 YouTube fits perfectly within the description of a UGC-platform given in the previous paragraph. After all, it enables users to (1) create their own content and (2) publish it under their own YouTube account name, while (3) they generally do not act from a commercial or institutional point of view. Noteworthy in this regard is the fact that some popular content creators develop a professional YouTube career, which enables them to earn a living.28 The top three creators managed to earn over sixty million US dollars from June 2018 through June 2019 alone.29 In my opinion, this automatically means that they no longer find themselves outside of a commercial market perspective, as their main goal is to make a profit from the content they upload. Therefore, in some exceptional cases content can fall outside of the definition of UGC, but it generally does not.

2.1.2 Content ID

Paradoxically, the fact that users earn remuneration on the content they create, in the form of ad revenue, can also be beneficial for rightsholders. In 2007, YouTube introduced a new content recognition technology called 'Content ID'.
The technology is used by YouTube to match reference files against UGC, allowing it to recognize copyrighted video and audio. At the time, this technology was especially welcome because the settled legal framework for copyright infringements on UGC-platforms was the notice and takedown procedure, which required YouTube to delete copyrighted content after a notice from the relevant rightsholder. In short, Content ID enables rightsholders to claim a share of the ad revenues earned through a video. Rightsholders register a claim with YouTube by providing a reference file, which is stored in a database. Content ID creates a fingerprint of every uploaded user file. This fingerprint is then used to query the database populated with fingerprints of the reference files provided by rightsholders.30 Every match gives rightsholders the ability to either monetize the content or have it blocked. Content ID therefore offers rightsholders two major benefits over notice and takedown. It continuously monitors content uploaded to YouTube for copyrighted works stored as reference files, relieving rightsholders of the hassle of sending notices. Additionally, it enables them to monetize user infringements instead of blocking them, constituting a new stream of income.31 Content ID has become more relevant than ever. While the technology has been improved over time due to technological advancements, it still remains to be seen whether its capabilities meet expectations. This will be further elaborated on in Chapter four.

2.2 The Value Gap

As the notice and takedown procedure initially only enabled rightsholders to block infringing content, Content ID seems to have introduced a new stream of income for the Music Industry by allowing rightsholders to monetize infringing content. However, this monetization does not seem to have satisfied the Music Industry.

25 Burgess, J. & Green, J. (et al.). (2009), p. 1.
26 Smith, K. (February 21, 2020).
27 Hardwick, J. (May 12, 2020).
28 See, for example: Kim, J. (2012); Salvato, N. (2009); Mansoor, I. (2021).
29 YouTube Company Report (2020), p. 3.
30 See: YouTube Creators, YouTube Content ID, YouTube (September 28, 2010).
31 Bridy, A. (2019), p. 330.

Hence, in 2015, the term "Value Gap" was first coined.32 Before elaborating on the Value Gap, it is necessary to define the problem at hand.

2.2.1 Definition

The Music Industry has long complained that YouTube undercompensates music rightsholders for streams of user-uploaded videos containing copyrighted content.33 This is partially due to YouTube's strong position when negotiating the level of remuneration, because artists can use the platform to advertise and release their latest music. An example: Taylor Swift's "Look What You Made Me Do" managed to yield 43.2 million views in its first 24 hours.34 This generates brand awareness as well as ad revenue. Artists obviously do not want to miss out on this opportunity to promote their music worldwide, free of charge. Consequently, even where voluntary deals are reached between rightsholders and YouTube about remuneration for infringing content, those deals may still not satisfy the requirement of a "fair share" of the value created through rightsholders' intellectual property rights. After all, YouTube is a commercial entity aiming to maximize profit, which is reflected in the amount of money it offers artists in return. Fundamentally, the Value Gap is the supposed mismatch between what YouTube pays rightsholders per stream and the income it generates through ad revenues. This mismatch is caused mainly by YouTube's strong bargaining position which, as we will see in Chapter three, is largely due to our current Safe Harbor legislation.
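As a rough, back-of-the-envelope illustration of this mismatch, the figures cited in section 2.2.3 below (subscription platforms: roughly 358 million users and 56.1% of global recorded music revenue in 2019; ad-supported platforms, including YouTube's estimated 2 billion users: 14.1%) can be set side by side. The following sketch is mine, not part of the cited reports, and the variable names are invented; no absolute revenue total is needed, since the comparison works on shares and user counts alone.

```python
# Back-of-the-envelope Value Gap arithmetic, using only the figures
# discussed in section 2.2.3 (IFPI Global Music Report 2020 /
# YouTube Company Report 2020). Illustrative, not authoritative.

SUB_USERS_M = 358        # subscription streaming users, in millions
SUB_REV_SHARE = 0.561    # share of global recorded music revenue, 2019

AD_USERS_M = 2000        # YouTube's estimated user base, in millions
AD_REV_SHARE = 0.141     # share contributed by ALL ad-supported platforms

revenue_ratio = SUB_REV_SHARE / AD_REV_SHARE   # ~3.98x the revenue
user_ratio = AD_USERS_M / SUB_USERS_M          # ~5.59x fewer users
per_user_ratio = revenue_ratio * user_ratio    # ~22x more revenue per user

print(f"Subscription platforms generate {revenue_ratio:.1f}x the revenue")
print(f"with {user_ratio:.1f}x fewer users,")
print(f"i.e. roughly {per_user_ratio:.0f}x more revenue per user.")
```

Because the 14.1% figure covers all ad-supported platforms together, YouTube's own per-user contribution is, if anything, lower, so the resulting per-user disparity is a conservative lower bound.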
2.2.2 Critique on the Value Gap

Some academics argue that the Value Gap is a slogan created by Music Industry trade groups to sell policy makers on the idea that copyright Safe Harbors are not a sound policy choice for the whole internet, but a legal loophole that allows YouTube to unfairly exploit the music industry's valuable intellectual property.35 Simply put, these groups seek to redefine the scope of the existing copyright Safe Harbors in the European Union so as to exclude YouTube from their protection. Another critique is that the equivalence the music industry alleges between services such as Spotify and UGC-platforms like YouTube is false.36 The main argument is that UGC-platforms make it possible for their users to compete for attention on equal terms with more traditional entertainment industry offerings.37 Services like Spotify, on the other hand, are solely focused on the distribution of music, lacking the creative effort requirement and including commercial elements through subscriptions.38 Additionally, Spotify knows exactly what content is available through its service. Because UGC-platforms let users decide on the availability of content, the two operate within different business models and should not face the same legal risk.39 The critique thus focuses on the fact that the legal framework which enabled UGC-platforms to exist will be altered at the expense of fundamental characteristics of those platforms and the fundamental rights of their users. Whether IP rights must indeed prevail depends largely on the size of the Value Gap, as well as on the role UGC-platforms play within it. After all, there are other interests at stake, such as the open internet, the expressive rights of internet users, and UGC-platforms that allow internet users to take part in the digital economy and culture. At the same time, these interests are not absolute, and intellectual property rights may render regulatory action permissible or even necessary.
Either way, this raises the question of how large the Value Gap is, and how it has developed over time.

32 IFPI Global Music Report 2015, p. 22-25.
33 Bridy, A. (2019), p. 325.
34 YouTube Company Report (2020), p. 10.
35 See, for example: Bridy, A. (2019), p. 326.
36 Ibid., p. 327.
37 Sag, M. (2017), p. 518.
38 Note that YouTube nowadays offers a similar service, which also uses subscriptions as a commercial model. In my opinion, YouTube Music further clarifies the difference between YouTube as a UGC-platform and streaming services such as Spotify and Apple Music.
39 See, for example: Bridy, A. (2019), p. 327.

2.2.3 Scale and numbers

The Value Gap continued to grow by an average of 139 million US dollars per year from 2013 to 2017, despite significant growth in streaming revenues.40 In 2019, streaming for the first time made up over half of global recorded music revenue: 56.1%.41 It is, however, not subscription streaming but UGC services that form the biggest source of recorded music consumption. YouTube secures 5 billion views per day, with users watching over 1 billion hours' worth of videos per day. Subscription streaming platforms count 358 million users, while ad-supported platforms 'only' account for 14.1% of total revenue. Music Industry growth in 2019 was mostly due to an increase in streaming revenues, which rose by 24.1%. An accompanying 15.3% decrease in digital revenues excluding streaming suggests that digital revenues are currently migrating towards subscription streaming formats. This raises the question of what is contributed by YouTube, the biggest UGC-platform in existence. Ad-supported platforms (including YouTube) contributed just 14.1% of global Music Industry revenues in 2019, while YouTube alone holds an estimated user base of more than 2 billion.
Comparing these findings to those of subscription-based platforms shows that those platforms contributed 56.1% of global industry revenues in 2019, with an estimated user base of 358 million. Subscription platforms therefore have about 82% fewer users than YouTube, yet generate nearly four times as much revenue.42 This also means that YouTube returned only around 2 billion US dollars to the Music Industry, while its total ad revenues were 15.1 billion. As the Value Gap exists and does not seem to be closing any time soon, the Music Industry has a legitimate claim to a bigger share of advertising revenues. As we have seen, YouTube is the biggest UGC-platform and still disburses only around 13 percent of its total ad revenues to artists. The DSM Directive might enable the Music Industry to earn its fair share. Before exploring the new legislation, the challenges it may pose, and the question of whether a fair balance between fundamental rights will be struck, the next chapter delves deeper into the legislation which has enabled the Value Gap to grow in size. 40 Barker, G.R. (2019), p. 3. 41 IFPI Global Music Report 2020, p. 14. 42 These numbers are based on the relation between total revenue and total user count (roughly, 56.1% of revenue divided over 358 million users versus 14.1% divided over 2 billion users: each subscription user generates over twenty times as much revenue as each YouTube user). All numbers are based on the Global Music Industry report 2019 and the YouTube Company Report 2020. Streaming: 358 million users and 56.1% of total revenue. YouTube: 2 billion active users and 14.1% of total revenue. Subscription platforms may generate even more value in comparison, as the 14.1% accounts for all ad-supported platforms. This means that YouTube has paid anywhere between 1 and 14.1 percent of total revenue. Chapter 3 – Platform liability under current legislation What is the existing regulatory framework for liability of intermediary service providers? Article 17 DSMD raises an important question: what were the prior responsibilities of UGC-platforms under the ECD and InfoSoc Directives?
This chapter aims to provide the reader with an analysis of the existing regulatory framework, which is essential for understanding the changes and challenges the new DSM Directive might introduce. This chapter begins by explaining the E-Commerce and InfoSoc Directives, which contain the provisions relevant to the Value Gap. Additionally, existing as well as pending CJEU case law will be included to provide the reader with a substantive analysis of the current legal framework regarding the liability of User Generated Content platforms. Answering this sub-question will enable the reader to fully comprehend the differences and challenges introduced by Article 17 DSMD, which are at the heart of Chapter four. Section 4 of Directive 2000/31, entitled 'Liability of intermediary service providers', comprises Articles 12 through 15 of the Directive. It is important to note that the rules governing intermediary liability are not entirely harmonized within the EU. In particular, the modalities through which liability is imposed on internet intermediaries, as well as the conditions required for such liability, remain fragmented across Europe.43 This chapter focuses solely on European law and CJEU jurisprudence. 3.1 The Safe Harbor Safe Harbor legislation is codified in Article 14 of the E-Commerce Directive. Ultimately, this is the provision on which the notice and takedown procedure is based. The Safe Harbor exempts intermediaries such as YouTube from liability for content posted by their users, but only when certain conditions are met.
In order to enjoy protection under Article 14 ECD, the provider of an information society service: (1) must not have actual knowledge of illegal activity or information and, as regards claims for damages, must not be aware of facts or circumstances from which the illegal activity or information is apparent; and (2) upon obtaining such knowledge or awareness, must act expeditiously to remove or to disable access to the information.44 3.1.2 Applicability The Safe Harbor provision is applicable to providers of information society services (ISSPs) which store information provided by a recipient of the service.45 The term service provider is laid down in Article 2(b) E-Commerce Directive and covers any natural or legal person providing an information society service. In turn, an information society service is normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of the relevant service.46 'At a distance' and 'by electronic means' suggest that the service has to work remotely, on the internet, at the individual request of the user. The fact that the service must normally be provided for remuneration does not exclude services that are free to users, as long as the service represents an economic activity.47 In YouTube's case, revenue earned through advertisements enables it to offer its services for free, which constitutes an economic activity. Consequently, YouTube theoretically falls within the scope of the Safe Harbor provision of Article 14 ECD. 43 See, for example: Angelopoulos, C. (2013), p. 22. 44 Directive 2000/31/EC, Article 14. 45 Ibid., Article 14(1). 46 Directive 2000/31/EC, Article 2(a) jo. Directive 98/48/EC, Article 1(2). (Article 1(2) has been codified in Directive (EU) 2015/1535, Article 1(1)(b).) 47 Directive 2000/31/EC, Recital 18; additionally, see: Case C-352/85, Bond van adverteerders v. The Netherlands State, 26 April 1988; as confirmed specifically for information society services in: Case C-291/13, Papasavvas v.
O Fileleftheros Dimosia Etairia, 11 September 2014, para. 29. However, not all information society services automatically enjoy Safe Harbor protection. It is also relevant whether the service provider qualifies as neutral or active. These definitions have been further developed by the CJEU in Google France and L'Oréal v. eBay.48 Where the intermediary is predominantly passive or neutral, it may benefit from the ECD Safe Harbor. Where it is active, it will lose that privilege and its role shall be assessed according to national intermediary liability regimes.49 In this regard, Recital 42 states that the ECD covers only cases where the activity of the service provider is: (a) limited to the technical process of operating and giving access to a communication network over which information made available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient; and (b) of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored. The application of Recital 42 has led to confusion about the scope of hosting information society services, and CJEU case law has not managed to fill this gap. In L'Oréal v. eBay, the Court stated that the mere fact that an online service provider (in this case a sales platform) "set the terms of its service, is remunerated for that service and provides general information to its customers" does not constitute an active role.50 However, an active role may be presumed if eBay were to assist users in "optimizing the presentation of the offers for sale or promoting those offers", as the platform would then possess knowledge of, or control over, the data relating to those offers for sale.
In that case the service provider cannot rely on the exemption from liability referred to in Article 14(1) ECD.51 Case law on this subject is scarce, and the question has never been addressed specifically with UGC-platforms such as YouTube in mind. In some cases, the Court assumes the passive role of a service provider by not discussing it at all.52 In several national cases, though, YouTube was considered to play a passive role.53 At the same time, one could argue that YouTube plays an active role.54 There is no consensus, and the Court of Justice of the European Union has yet to shed light on this subject. Nevertheless, while YouTube did not yet exist when the E-Commerce Directive took effect, the Directive was designed to encompass future developments. Recital 18 ECD mentions that the hosting of information provided by a recipient of the service is an 'information society service'. This forms the very core of what YouTube does. Additionally, legal texts are not usually applied only to technologies in existence at the time of their drafting. In my view, the term 'storage' is broad enough to comfortably encompass YouTube.55 Whether YouTube qualifies as a neutral provider of such services nevertheless remains debatable. Fortunately, the German national court referred this question (amongst others) to the CJEU. C-682/18 YouTube The referral in Case C-682/18 YouTube came from two German cases concerning the illegal uploading of content on hosting websites and claims from rightsholders.56 Advocate General Saugmandsgaard delivered his Opinion on 16 July 2020.57 The six questions asked by the referring court revolve around the issue of the liability of online 48 Opinion of Advocate General Poiares Maduro in Joined Cases C-236/08, C-237/08 and C-238/08, Google France v. Louis Vuitton Malletier, para. 143-145; Joined Cases C-236/08 to C-238/08, Google France v. Louis Vuitton Malletier, para. 113ff. 49 Hoboken, J. van, Quintais, J.P., Poort, J. & Eijk, N. van (2018), p. 7.
50 Case C-324/09, L'Oréal SA v. eBay International, para. 115. 51 Case C-324/09, L'Oréal SA v. eBay International, para. 116. 52 See, for example, Case C-360/10, SABAM v. Netlog. 53 France: TF1 et autres c. YouTube, Tribunal de grande instance de Paris, 29 May 2012; Germany: OLG Hamburg, 1 July 2015, 5 U 87/12. For another case on YouTube as an information society service see: OLG München, 17 November 2011, 29 U 3496/11. Spain: Sent. JM n.7 Madrid, 20 September 2010; partially confirmed by AP Madrid (sec. 28), 14 January 2014 [Telecinco v. Youtube] Westlaw.ES JUR\2014\36900. 54 Directorate General for International Policies (2018), p. 10; for additional examples in the UK, France, and Germany, see: Angelopoulos, C. (2017), p. 23-30. 55 Angelopoulos, C. (2017), p. 10; alternatively, see: Peguera, M. (2009), p. 496; Ginsburg, J. (2008), p. 594. The same reasoning has been used multiple times by the CJEU, see for example: Cases C-236/08 and C-237/08, Google France v. Louis Vuitton et al, 23 March 2010, para. 110-111; Case C-324/09, L'Oréal v. eBay International, 12 July 2011, para. 109-110 and, specifically regarding a (social networking) platform, Case C-360/10, SABAM v. Netlog, 16 February 2012, para. 27. 56 The relevant hosting providers in this case are Google (YouTube) and Cyando (Uploaded). 57 Joined Cases C-682/18 & C-683/18, YouTube and Elsevier, ECLI:EU:C:2020:586, hereinafter referred to as 'the case'. platform operators regarding copyright-protected works illegally uploaded onto their platforms by users.58 The case seeks to clarify the interpretation of Article 3 InfoSoc Directive (the rightsholder's exclusive right of communication to the public) and Article 14 ECD (exemption from liability for UGC).
In particular, it is currently unclear whether the former provision is applicable to such platform operators, whether they may rely on the latter provision, and how those provisions are interrelated.59 Of course, the case will be determined through the lens of the legal framework prior to the DSM Directive. The conclusions in this case could therefore be fundamentally different from the view adopted by the EU legislature in the DSMD. C-682/18 YouTube confirms that the question of whether YouTube is an active host has remained unanswered until now. Nevertheless, valid arguments can be made on the basis of which YouTube can be qualified as passive or neutral. The Opinion of the Advocate General supports this view as well.60 For a clear understanding of the protection YouTube enjoys under Article 14 ECD, we will have a closer look at the scope of this protection as well as its limitations. 3.1.3 Scope of Protection As we have seen, Safe Harbor legislation protects providers of information society services from liability. However, in some cases the protection granted by the Safe Harbor does not apply; the ECD incorporates limitations. In Article 14(3) ECD, the legislator has chosen not to affect the possibility for national courts to require platforms to terminate or prevent an infringement, nor the possibility for Member States to establish procedures governing the removal or disabling of access to information. Consequently, the Directive provides rightsholders with the ability to have content blocked or removed by means of national judicial procedures or duties of care.61 Content can only be detected and prevented on a large scale by means of content recognition technology, not through manual labor. De facto, YouTube will have to implement upload filters in order to fulfil its obligations.
Rules of national law must be designed in such a way that the objectives pursued by the Directive may be achieved.62 In this light it is remarkable that both the permissible scope of injunctive orders and that of duties of care may oblige YouTube to filter (future) infringing content in order to detect and delete it. After all, this raises questions in light of Article 15 ECD and the prohibition on general monitoring.63 First, I will briefly explain what the prohibition of Article 15 ECD entails. Then I will focus on case law which further clarifies this prohibition in light of the notice and takedown procedure and its limitations. 3.2 General Monitoring Prohibition Article 15 ECD, also known as the prohibition on general monitoring obligations, prohibits Member States from forcing intermediaries which enjoy protection under Article 14 ECD to: (1) generally monitor all content posted on their platforms by users; or (2) actively seek facts or circumstances indicating illegal activity.64 The relation between the injunctive orders and duties of care of Article 14(3) ECD, and Article 15's prohibition on general obligations to actively seek facts or circumstances indicating illegal activity, is cause for concern.65 Article 15 ECD seems to limit the scope of both injunctive orders and duties of care, because it is hard to envisage a situation in which there is a duty to detect without seeking facts.66 58 Joined Cases C-682/18 & C-683/18, YouTube and Elsevier, ECLI:EU:C:2020:586, para. 3. 59 Ibid., para. 4. 60 Ibid., para. 92, 167, 256. 61 With regard to duties of care, see Recital 48 of Directive 2000/31/EC. 62 Case C-324/09, L'Oréal SA v. eBay International, 2011, I-06011, para. 136. 63 Angelopoulos, C. (2017), p. 14. 64 Directive 2000/31/EC, Article 15. 65 Commission Staff Working Paper (2011), SEC(2011) 1641 final, p. 47. 66 Bagshaw, R. (2003), p. 72. Recital 47 ECD separates obligations of a general nature from monitoring obligations in a specific case.
The most important element of Article 15 ECD therefore seems to be the word 'general', as other monitoring obligations seem to be permitted. The relationship between preventive obligations and Article 15 ECD has been addressed in four consecutive cases before the CJEU. The next section addresses this case law in order to further define the difference between (permitted) specific filtering obligations and those of a general nature. L'Oréal v. eBay International First, in L'Oréal v. eBay,67 the CJEU briefly set out the boundaries for the kind of obligations that may be imposed.68 It is clear from Article 15(1) ECD that the measures required of the online service provider concerned cannot consist of an active monitoring obligation covering all data of each of its customers in order to prevent any future infringement of intellectual property rights via that provider's website.69 The CJEU deems such measures disproportionate and excessively costly to implement.70 In this case, however, we also find a hint towards possible preventive obligations for online intermediaries which are in line with Article 3 of Directive 2004/48/EC. The Court states that if the platform, in this case an online marketplace, does not decide to suspend the perpetrator of the infringement of intellectual property rights in order to prevent further infringements of that kind by the same seller in respect of the same trademarks, it may be ordered to do so by means of an injunction. In a sense, one could say that this lays the basis for a specific monitoring obligation. Here, 'specific' seems to refer to a specific incident of infringement, and not to all instances in which the specific trademark is used. SABAM v. Scarlet & SABAM v. Netlog The SABAM cases expanded on L'Oréal v. eBay. Both concern injunctive orders imposing filtering obligations on intermediaries, which would have had to prevent relevant files from being made available to the public.
71 The envisioned filtering obligations in the SABAM cases were contrary to what L'Oréal v. eBay permitted. The requested filtering obligation would apply to all the intermediary's customers, as a preventive measure, exclusively at the intermediary's cost and for an unlimited period.72 Additionally, in the L'Oréal case, the Court mentions Article 3 of the Enforcement Directive (ED) to confirm its conclusions.73 According to this article, measures must be fair, proportionate and not excessively costly.74 Applying L'Oréal to the injunctive orders in the SABAM cases would, in my view, prohibit the intended filtering obligations. The CJEU seems to be of the same opinion. The Court weighed the right to intellectual property against Netlog's freedom to conduct a business, and observed that the filtering system would involve monitoring all or most of the information on Netlog's servers in the interest of copyright holders, would have no limitation in time, would be directed at all future infringements and would be intended to protect not only existing but also future works.75 Consequently, the national court would, upon adopting such an injunction, not be respecting the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct a business, the right to protection of personal data and the freedom to receive or impart information on the other.76 This would result in a serious infringement of fundamental rights and, again, of Article 3(1) ED.77 All three cases above help to describe the relationship between preventive obligations and Article 15 ECD. It seems that specific filtering obligations are permissible if they are fair, proportionate and not excessively costly for the platform to implement.78 Specific filtering obligations can then be used either voluntarily or in conjunction 67 Case C-324/09, L'Oréal SA v. eBay International, 2011, I-06011. 68 Ibid. 69 Ibid., para. 139.
70 Ibid.; additionally, see Directive 2004/48, Article 2(3). 71 Case C-360/10, SABAM v. Netlog, para. 26, 36-37. 72 Angelopoulos, C. (2017), p. 14. 73 See Article 3 of Directive 2004/48/EC. 74 Case C-324/09, L'Oréal SA v. eBay International, 2011, I-06011, para. 139-140. 75 Case C-360/10, SABAM v. Netlog, para. 45. 76 Ibid., para. 51; see, by analogy, Case C-70/10, Scarlet Extended v. SABAM, para. 53. 77 Ibid., para. 46-47. 78 Additionally, see Recital 47 of Directive (EU) 2019/790. with the duty of care or injunctive order. It is noteworthy that excessive use of voluntary filtering might result in the exercise of too much control. The platform could consequently lose its neutral status, as the service would no longer be of a mere technical, automatic and passive nature, and it may therefore fall outside Safe Harbor protection. Glawischnig-Piesczek v. Facebook Recent confirmation of specific filtering obligations can be found in Glawischnig-Piesczek v. Facebook, in which the Court clarified which injunctions national courts may impose without violating the prohibition on general monitoring obligations. According to the CJEU, specific filtering obligations "appear to be sufficiently effective for ensuring that the person targeted by the defamatory statements is protected. [At the same time,] that protection is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies."79 Injunctive orders and duties of care can therefore also cover specific future infringing content. A specific monitoring obligation is not incompatible with Article 15 ECD.
This begs the question of what separates specific from general monitoring obligations. The terms identical and equivalent content can be used to define specific monitoring obligations. A host provider may be ordered to identify identical content for the purpose of enforcing an injunction. If a specific infringement has already been identified, the obligation to identify identical content posted by the same user does not constitute a general monitoring obligation.80 This also holds true for identical content posted by other users. Additionally, the Court ruled that host providers may be ordered to filter and block equivalent content as well.81 Unlike the Advocate General,82 the Court does not limit the scope of specific filtering obligations to content originating from the same user. The reason is that the effects of such an injunction could otherwise easily be circumvented by slightly altering content which was previously found to be infringing. This, in turn, could result in the person concerned having to initiate multiple proceedings in order to bring an end to the conduct of which he is a victim.83 Additionally, a specific monitoring obligation forcing the host provider to filter all content on its platform is not prohibited.84 The Court rules that the specific content must be properly identified in the injunction, and host providers may not be forced to carry out an independent assessment of the content.85 Any obligation contrary to (one of) these limitations may constitute a general monitoring obligation. While it has become clear that Articles 14 and 15 ECD rely heavily on the notice and takedown procedure, notices are not the only way of obtaining actual knowledge or awareness of infringing content. First, platforms are free to voluntarily seek infringing content.
86 Second, rightsholders can require platforms to take measures in order to prevent infringing content in the future.87 As has become clear from case law, such a specific monitoring obligation is not prohibited, provided the obligation is not excessive, is well specified in the injunction, and does not force host providers to carry out independent assessments. The next paragraph investigates the notion of carrying out an act of communication to the public and its implications in light of Directive 2001/29/EC. 79 Case C-18/18, Glawischnig-Piesczek v. Facebook Ireland, para. 46. 80 Glawischnig-Piesczek, EU:C:2019:821, para. 37. 81 Ibid., para. 41. 82 Opinion of A.G. Szpunar in Case C-18/18, Glawischnig-Piesczek, EU:C:2019:458, para. 73 & 74. 83 Glawischnig-Piesczek, EU:C:2019:821, para. 41. 84 Kuczerawy, A. & Rauchegger, C. (2020), p. 1504. 85 Case C-18/18, Glawischnig-Piesczek v. Facebook Ireland, para. 45. 86 In L'Oréal v. eBay, the CJEU states that it is also possible for intermediaries to undertake investigations on their own initiative, see Case C-324/09, L'Oréal SA v. eBay International, 2011, para. 122. 87 Case C-18/18, Glawischnig-Piesczek v. Facebook Ireland. 3.3 Information Society Directive 2001/29/EC Article 3(1) An important component of copyright is the exclusive right to communicate a work to the public or make it available to the public. This right is reserved solely for the relevant rightsholder, and any disclosure without his or her consent is prohibited. Article 3(1) of the InfoSoc Directive therefore states that: (1) Member States shall provide authors with the exclusive right to authorize or prohibit any communication to the public of their works, by wire or wireless means, including the making available to the public of their works in such a way that members of the public may access them from a place and at a time individually chosen by them.
3.3.1 Acts of communication to the public As we will see in Chapter four, the notion of communication to the public plays an important role in Article 17 DSMD. It can therefore not be excluded from this chapter. The CJEU has developed a complex, factor-based analysis to determine whether an act of providing access to a work in the digital environment qualifies as an act of communication to the public.88 In light of this thesis, the relevant question becomes whether platforms such as YouTube, which enable users to generate content, also communicate to the public. This would make such platforms directly liable, outside the scope of the Safe Harbor protection they enjoy. The CJEU interprets the notion of communication to the public broadly. The Pirate Bay, for example, was found to communicate to the public by indexing and categorizing metadata for protected subject matter and offering a search engine.89 It thereby played an indispensable role in enabling users to find illegal content.90 Noteworthy is the fact that the Court did not require The Pirate Bay to be aware of specific infringing material; it only required general knowledge that copyright-infringing material was shared through the platform.91 Obviously, this knowledge was present, as The Pirate Bay was designed to enable peer-to-peer sharing of content otherwise not available for free. If YouTube were to fall outside the scope of Article 14 ECD, the question remains whether YouTube communicates to the public, despite the fact that the platform works hard to block and filter infringing content. Moreover, YouTube and The Pirate Bay have fundamentally different purposes.
Pending Case C-682/18 YouTube deals with this question as well: "Does the operator of an internet video platform on which videos containing content protected by copyright are made publicly accessible by users without the consent of the rightsholders carry out an act of communication within the meaning of Article 3(1) of [Directive 2001/29]?"92 Advocate General Saugmandsgaard proposed that platform operators do not, in principle, carry out acts of communication to the public within the meaning of Article 3 of Directive 2001/29 and are therefore not directly liable for an infringement of that provision when their users illegally upload protected works. However, we will have to wait for the final judgment in order to have legal certainty. Until then, this question remains unanswered. 3.4 Concluding Remarks YouTube, in principle, falls within the scope of Safe Harbor protection. Therefore, YouTube is not directly liable for content posted by its users as long as it meets the requirements set forth in Article 14 ECD. However, where YouTube is an active host, it will lose the privilege of Safe Harbor protection and its role shall be assessed according to intermediary liability regimes at the national level. 88 Senftleben, M. (et al.) (2017), p. 21; see, for example: Case C-466/12, Svensson; Case C-160/15, GS Media; Case C-527/15, Brein (Filmspeler); Case C-610/15, Brein (Pirate Bay). 89 Grisse, K. (2019), p. 890. 90 Case C-610/15, Stichting Brein v. Ziggo, 2017, para. 38-39. 91 Grisse, K. (2019), p. 890. 92 Case C-682/18 YouTube, para. 38. Uncertainty remains about whether YouTube meets the requirement of neutrality. While arguments can be made for both sides, the CJEU has yet to make a definitive assessment.
This ambiguity will be resolved by the final judgment in Case C-682/18 YouTube, in which Advocate General Saugmandsgaard has proposed that YouTube, in principle, (1) does not carry out communications to the public, and is therefore not directly liable for an infringement of Article 3 InfoSoc Directive when its users upload illegal content, and (2) can benefit from the exemption laid down in Article 14(1) ECD in respect of all liability that may result from the files it stores at the request of the users of its platform. YouTube would therefore not be liable for content posted by its users, provided that the conditions laid down in Article 14 ECD are met. However, even neutral hosts are liable in some instances. Article 14(3), supported by Recital 48 ECD, leaves open the possibility for national courts to require YouTube to terminate or prevent an infringement by means of a court or administrative order. In practice, this is achieved by means of content recognition technology. Both the permissible scope of injunctive orders and duties of care may de facto oblige YouTube to filter future infringing content. This filtering obligation must: (a) not lead to the monitoring of all content; (b) be fair, proportionate and not excessively costly to implement; and (c) strike a fair balance between the right to intellectual property on the one hand, and the freedom to conduct a business, the right to protection of personal data and the freedom to receive or impart information on the other. Should YouTube fall outside the scope of Safe Harbor protection, the question becomes whether YouTube is directly liable for carrying out communications to the public. This question has also been raised before the CJEU in Case C-682/18 YouTube and has yet to be answered. Chapter four will analyze the new legislative regime. Additionally, Chapter four aims to identify differences between the current situation as described in this chapter and Article 17 DSMD.
Furthermore, the possible challenges Article 17 DSMD may pose to users' fundamental rights, OCSSPs' freedom to conduct a business and the participative web will be addressed. Chapter 4: Direct platform liability under Directive (EU) 2019/790 In what ways does the new liability regime differ from the old situation, and what challenges does it pose? This chapter focuses on explaining platform obligations under the new liability regime. The core of the new Directive in this regard is Article 17 DSMD, in combination with Recitals 61 through 71. Additionally, any element that differs from the ECD or InfoSoc Directive will be elaborated on. Then, at the end of every subparagraph, the differences in the DSMD will be used to identify challenges to the fundamental rights involved, existing (case) law and/or the participative web. The paragraphs in this chapter follow the sequence of Chapter three. The first paragraph explains the applicability of the DSMD: in order to comprehend the Directive, it is important to understand to which organizations it applies. Next, paragraph two explains what the Directive aims to change regarding the notion of communication to the public, as it lays the basis for the new liability regime. Paragraph three aims to explain this liability, as well as several limitations on it. Furthermore, Chapter four addresses exceptions which, in some cases, exempt OCSSPs from liability.
4.1 Applicability of Article 17 Directive (EU) 2019/790 The DSM Directive applies to all works and other subject matter that are protected by national law in the field of copyright on or after 7 June 2021.93 Article 17 DSMD regulates Online Content-Sharing Service Providers (OCSSPs) with the aim of closing the Value Gap.94 Whether a service provider is an OCSSP (and is bound by the DSMD) is answered on a case-by-case basis, taking into account a combination of elements, such as the audience of the service and the number of files of copyright-protected content uploaded by the users of the service.95 The DSM Directive focuses solely on service providers who store and give the public access to copyright-protected content, which means that its scope is limited to hosting providers.96 Services covered by the Directive are services "the main or one of the main purposes of which is to store and enable users to upload and share a large amount of copyright-protected content with the purpose of obtaining profit therefrom, either directly or indirectly, by organizing it and promoting it in order to attract a larger audience, including by categorizing it and using targeted promotion within it".97 The terms organizing and promoting seem to suggest that Article 17 DSMD only applies to active UGC-platforms. YouTube categorizes and recommends content based on a user's viewing history. Additionally, YouTube uses targeted promotion in the shape of personalized advertisements shown before users can view certain videos. Consequently, it is indisputable that YouTube fits the description of an OCSSP as set forth in Article 2(6) DSMD. Differences For services qualifying as an OCSSP, the ECD Safe Harbor ceases to apply in relation to copyright-protected material.
The DSMD is therefore a lex specialis to Article 14 ECD, and in order to qualify as an OCSSP, the service provider must:
- have as a main purpose to earn a profit by storing and enabling users to upload and share a large amount of copyright-protected content (i.e. a hosting provider); and
- organize and promote content (the requirement of an active host).
93 Directive (EU) 2019/790, Article 26(1). 94 Ibid., Article 2(6); additionally, see: Quintais, J.P. (2019), p. 17. 95 Ibid., Recital 63. 96 Directive (EU) 2019/790, Article 2(6) & Recitals 61-63. 97 Directive (EU) 2019/790, Recital 62.
As has become clear, within the old legislative regime there was no consensus on whether YouTube played a sufficiently passive role to enjoy Safe Harbor protection. The judgement in Joined Cases C-682/18 & C-683/18, YouTube and Elsevier, will answer this question in the near future. In contrast, the DSMD adds the requirement of an active host, in the sense that the provider organizes and promotes the content posted by its users. Consequently, YouTube loses Safe Harbor protection under the ECD in cases falling under the DSMD, i.e. cases concerning copyright infringements. The interpretation of an active role is now more in line with the L'Oréal v. eBay judgement, in which the Court ruled that eBay (an online marketplace) played an active role by optimizing and promoting offers for sale on eBay.98 In any case, the previous legislation explained in Chapter three (including L'Oréal v. eBay) is no longer relevant for YouTube in cases of copyright infringement, because such cases now fall within the new legislative regime. Note that for other forms of illegal content, such as defamatory content, the Safe Harbor regime continues to apply.
Challenges
First, critics argue that the requirement of a case-by-case assessment of whether a service provider is an OCSSP is presumably a way to avoid implicating noncommercial and less dominant providers.99 The DSMD seems to focus mainly on YouTube at the moment, as it can contribute substantially to closing the Value Gap. At the same time, a case-by-case assessment sacrifices certainty for businesses. They are unable to conclude, on the basis of the text of Article 17 DSMD alone, whether they qualify as an OCSSP. I believe this to be undesirable, since the consequences of being excluded from Safe Harbor legislation are considerable. It remains to be seen in future case law what the exact definition of an OCSSP will be, and which companies do and do not fall within it. Until then, legal certainty is a long way off. Second, the DSMD seems to categorize OCSSPs, including YouTube, as active hosts. However, the ECD Safe Harbor remains applicable for purposes falling outside the scope of this Directive.100 The DSMD therefore seems to suggest that YouTube, which organizes and promotes by definition, is an active host when it comes to copyright-protected content. With regard to all other infringing content, such as defamatory speech, it may still be neutral and thus enjoy the protection of the ECD Safe Harbor. As we have seen in the previous chapter, there was already a lack of clarity regarding the definitions of neutral and active hosts. Depending on the final judgement in Cases C-682/18 & C-683/18, YouTube and Elsevier, the DSMD may have introduced legal uncertainty. After all, the Court may rule that YouTube is a neutral host, contrary to what the DSMD has established.
4.2 Acts of Communication to the Public
Apart from introducing the definition of an OCSSP, the DSMD provides that platforms which qualify as an OCSSP communicate to the public in cases concerning copyright infringements.
Article 17(1) DSMD provides that Member States "shall provide that an online content-sharing service provider performs an act of communication to the public or an act of making available to the public for the purposes of this Directive when it gives the public access to copyright-protected works or other protected subject matter uploaded by its users." The DSMD therefore introduces an extended notion of communication to the public for OCSSPs, by equating the hosting, organization and promotion of copyright-protected content with a communication to the public.101 However, Article 17(3) DSMD ensures that the definition of communication to the public does not change in cases falling outside the scope of the Directive. The general definition of communication to the public is therefore not broadened, and both definitions will exist independently.
98 Case C-324/09, L'Oréal SA v. eBay International, para. 116. 99 Bridy, A. (2019), p. 352. 100 Directive (EU) 2019/790, Article 17(3). 101 See Frosio, G. (2019), p. 11; for a discussion of the nature of the right to communicate to the public within the DSMD, see: Husovec, M. & Quintais, J.P. (2019).
Differences
As we have seen in paragraph 3.2.1, communication to the public was subject to a complex factor analysis within the ECD. It was unclear whether the hosting of UGC also constituted an act of communication to the public. Advocate General Saugmandsgaard Øe proposed that platform operators do not, in principle, carry out acts of communication to the public within the meaning of Article 3 InfoSoc Directive, and are therefore not directly liable for an infringement of that provision when their users illegally upload protected works. The DSMD has resolved this uncertainty by equating the operations of an OCSSP with a communication to the public. This lays the foundation for direct liability for OCSSPs such as YouTube whenever users upload copyright-infringing content.
This also means that the complex factor analysis remains relevant for assessing whether an ISSP communicates to the public in cases not concerning copyright-protected content, which therefore fall within the scope of the ECD. The next paragraph covers the new liability regime as set forth in Article 17 DSMD. Both the licensing and filtering obligations will be explained, followed by a discussion of the differences and possible challenges they pose.
4.3 Liability and Limitations
Article 17(3) DSMD expressly states that when an Online Content-Sharing Service Provider performs an act of communication to the public or an act of making available to the public under the conditions laid down in the DSMD, the limitation of liability established in Article 14(1) of Directive 2000/31/EC shall not apply to the situations covered by that Article. YouTube is therefore, in principle, liable for infringing copyright-related content posted by its users. There are, however, two ways for OCSSPs to circumvent liability and to prevent otherwise infringing content from being blocked on their platform.
4.4 Licensing
The first limitation on liability lies in the opportunity for service providers to obtain authorization from rightsholders in the form of a licensing agreement.102 A licensing agreement is a mutual agreement between two parties under which one is authorized to use works for which the other holds the rights. Licensing agreements must be fair, and rightsholders cannot be forced to enter into one. This should ensure a new stream of fair remuneration for artists, because their negotiating position is improved. Obtained licenses not only exempt the service provider from direct liability, but also from liability arising from content uploaded by its users.103 Licensing therefore prevents both direct and indirect platform liability. Additionally, the DSM Directive enables voluntary or collective licensing, which might speed up the process.
It enables OCSSPs to authorize large portions of works in one deal. As we will see, YouTube has to obtain authorization for a very large portion of works, and doing so one by one would cost a substantial amount of work and resources. First, we will have a closer look at the differences with the old situation. Next, possible challenges will be addressed.
4.4.1 Differences
Within Article 17 DSMD, licensing is the primary avenue to circumvent liability and to provide rightsholders with an improved negotiating position. The procedure is new, and can therefore not be compared with the E-Commerce or InfoSoc Directives. After all, one of the problems which led to the existence of a Value Gap was the fact that artists had a weak bargaining position. Under Article 17 DSMD YouTube might lose certain content which has not been licensed, causing its users to be unable to consume it. Now that OCSSPs are in principle subject to liability, and because YouTube provides artists with a large audience for their works, both parties are incentivized to agree on a licensing deal.
4.4.2 Challenges
YouTube earns excessive amounts of money from illegal content on its platform, and a large share should flow back to the artists. I therefore agree that copyright deserves more protection and that licensing is an effective way to do so.
102 Directive (EU) 2019/790, Article 17(1), second paragraph. 103 Ibid., Article 17(2).
While at first glance the new licensing approach seems like a substantial contribution to closing the Value Gap, there is also critique on licensing obligations as the primary avenue to do so. It is not certain that their implementation will be without challenges. The first consequence of a licensing obligation is twofold. Platforms will have to invest heavily in licenses to be able to provide users with a diverse inventory of content. Additionally, content presumably has to be (temporarily) removed until a license has been granted.
After all, a commercial business would never voluntarily risk liability. This situation is a logical consequence of the compromise between artists and UGC-platforms, but may still be problematic. While I would not necessarily define this as a future challenge, the fact that content may be removed only to be re-uploaded after a licensing deal seems disproportionately burdensome for OCSSPs. In some cases content will never be available to users again, which could affect the business model of platforms such as YouTube. There should be more effective ways to implement licenses without excessively disrupting UGC-platforms. Alternatively, some rightsholders might be willing to grant the relevant OCSSP a timeframe in which they can negotiate without the OCSSP being liable for the UGC in the meantime. Second, it may in some cases be hard to obtain licenses to works in the first place. Some critics expect as much, comparing the situation to a 'mission impossible' or a 'herculean task'.104 Others argue that it is extremely difficult, if not impossible, to obtain the necessary licenses for all the works uploaded by users.105 The reason for this is twofold. First of all, many works have multiple rightsholders, and rights may therefore overlap. For example, a song (i.e. the melody, chord progression, lyrics, etc.) may be copyrighted, but every official recording of that song carries an additional copyright: the sound recording copyright. For video, even more rights may overlap, increasing the difficulty of obtaining every necessary license.106 This is problematic, because failing to license just one of the overlapping rights will result in primary liability for the OCSSP. Additionally, while OCSSPs can reach out to rightsholders and their respective collecting societies themselves, some rightsholders remain unknown.107 This is caused by the lack of a national (or supranational) database of copyright-protected works. This has also been brought up by J.
Reda before the United States Senate Committee.108 Only when rightsholders approach the OCSSP will it be able to ascertain whether it can close a licensing deal. Otherwise, with regard to works from unknown rightsholders, OCSSPs are left with the option to either risk liability or sacrifice their users' right to freedom of speech and/or information by blocking the content altogether. As mentioned earlier, my expectation of any commercial entity is that it would block the content in order to circumvent liability. Of course, it is also possible that OCSSPs aim to retain as many users as possible. By doing so, ad revenues might compensate them for possible liability claims in the future. While unknown rightsholders are therefore not necessarily a challenge, the situation will have a negative impact on one of the parties involved. Whether this will be users who have less content at their disposal, or the platform that is liable for some content, the future will tell. In my view, acquiring licenses for all works that could be uploaded by users, for all rights involved, and from all relevant rightsholders, could therefore prove too burdensome for OCSSPs in practice. Granted, there are exceptions for smaller entities, but those are only temporary, and at a certain point in time they too will have to adhere to the full obligations of Article 17 DSMD. This viewpoint is strengthened by the fact that the most obvious alternative (blocking content in order to circumvent liability) comes at the cost of platform users' right to freedom of information and the participative web. However, Article 17(4) DSMD, which comes into play when an OCSSP has failed to conclude a licensing deal, might ensure that the relevant OCSSP is not liable. As we will see, one of the cumulative requirements is that best efforts have been made to obtain a license. Whenever it is impossible to obtain a license, these best efforts suffice. Additionally, OCSSPs must make best efforts to ensure the unavailability of specific works and act expeditiously subsequent to notice from rightsholders. In this regard the authorization avenue is only one of the cumulative requirements set forth in Article 17(4) DSMD, and it may prove a difficult one on many occasions. After all, proving that best efforts were made may prove difficult in practice, and the future will show how this concept is interpreted. Paragraph 4.5 will elaborate on the second obligation: upload filters.
104 See, in this regard: Senftleben, M. (2020); Samuelson, P. (2020); Dusollier, S. (2020), p. 1001. 105 Angelopoulos, C. & Quintais, J.P. (2019), p. 147-148; Dusollier, S. (2020), p. 1014; Senftleben, M. (2018). 106 Apart from the copyright(s) on the video itself, there may be rights of phonogram, performer(s) and/or film producers. 107 Grisse, K. (2019), p. 893. 108 Reda, J. (2020).
4.5 Upload Filters
OCSSPs failing to conclude a licensing agreement are directly liable for unauthorized acts of communication to the public, including the making available to the public of copyright-protected works. Article 17(4) DSMD provides an alternative way to avoid liability, by confirming that an OCSSP without a licensing agreement is directly liable, unless it has:
(1) made best efforts to obtain an authorization;
(2) made best efforts to ensure the unavailability of specific works for which the rightsholders have provided it with the relevant and necessary information; and
(3) acted expeditiously, subsequent to notice from rightsholders, to take down infringing content and made best efforts to prevent its future upload.109
Prerequisite (3) embodies the notice-and-takedown procedure we have seen in Chapter three. Additionally, it obliges OCSSPs to prevent future uploads of the same content: a notice-and-staydown procedure.
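The cumulative character of these conditions can be made explicit in a short schematic sketch. This is a purely illustrative encoding of the provision's logic, not a legal test; all names are invented for illustration, and the real "best efforts" standards are open legal questions rather than booleans:

```python
# Schematic encoding of Article 17(4) DSMD: the exemption only applies
# when ALL three conditions hold, because they are cumulative, not
# alternative. (Illustration only; not a legal test.)
from dataclasses import dataclass

@dataclass
class Article17_4Facts:
    best_efforts_to_license: bool        # (1) tried to obtain authorization
    best_efforts_unavailability: bool    # (2) ensured unavailability of notified works
    expeditious_takedown_staydown: bool  # (3) notice-and-takedown plus staydown

def exempt_from_liability(has_license: bool, facts: Article17_4Facts) -> bool:
    """An OCSSP avoids liability either through authorization (Art. 17(1)-(2))
    or by satisfying every cumulative condition of Art. 17(4)."""
    if has_license:
        return True
    return (facts.best_efforts_to_license
            and facts.best_efforts_unavailability
            and facts.expeditious_takedown_staydown)
```

The conjunction makes visible why failing any single condition, for instance the staydown duty, defeats the entire exemption.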
Additionally, the most intrusive requirement, (2), requires the service provider to block future uploads of content for which rightsholders have provided it with relevant information. As we have seen, this is achieved through Content ID. There is general agreement in the literature that the best efforts requirement equals the adoption of upload filters.110 Possible negative effects of an upload filter will be elaborated on in paragraph 4.5.2. Instead of abandoning the ban on general monitoring, Article 17(8) DSMD contains a further manifestation of the overarching principle expressed in Article 15(1) ECD: an unspecific, general monitoring obligation would be excessive and incompatible with EU law.111 Article 17(8) DSMD states that the application of the new liability system 'shall not lead to any general monitoring obligation'. The question therefore arises whether general monitoring corresponds to the term used in Article 15 ECD, and if not, whether the prohibition of Article 15 ECD has survived at all.112 The role of Article 15 ECD within the DSMD will be further elaborated on in the challenges section of this paragraph. Furthermore, Article 17(5) DSMD states that the following elements, combined with the principle of proportionality, shall be taken into account in determining whether the service provider has complied with its obligations under paragraph 4: (a) the type, the audience and the size of the service and the type of works or other subject matter uploaded by the users of the service; and (b) the availability of suitable and effective means and their cost for service providers.113 Under (a), certain types of platforms are excluded from the obligations following from Article 17(4) DSMD, by looking at the type of content they offer as well as the size of their audience.
The principle of proportionality, in combination with the availability of suitable and effective means as well as their cost for the service provider (b), could also lead to exclusion from (some) obligations. Still, we will have to see how national law and courts implement these factors, and how big a role they will play. Arguably, a serious consideration of the principle of proportionality in this context might ensure that obligations for OCSSPs are not too onerous. However, the question of whether an obligation to comply with the best efforts standard is disproportionate will depend on the interpretation of the role of proportionality in this context. Additionally, it is important to know when best efforts have been made, and therefore how this standard will be implemented. As we will see, this might also present a challenge in the new legal framework, especially because OCSSPs are directly liable in cases where they have not made best efforts. The Commission, in cooperation with Member States, shall organize stakeholder dialogues to discuss best practices for cooperation between OCSSPs and rightsholders. The Commission shall, based on these dialogues, issue guidance on the application of Article 17 DSMD, in particular regarding the cooperation referred to in paragraph 4 thereof. When discussing best practices, special account shall be taken, among other things, of the need to balance fundamental rights and of the use of exceptions and limitations.114 Before addressing the critique in further detail, I will explain the main differences between the DSMD and the old legislative regime regarding upload filters.
109 Directive (EU) 2019/790, Article 17(4) and Recital 66. 110 Moreno, F. (2020), p. 154; Burri, M. & Zihlmann, Z. (2021), p. 14-15. 111 Senftleben, M. & Angelopoulos, C. (2020), p. 24. 112 Senftleben, M. & Angelopoulos, C. (2020), p. 7. 113 Directive (EU) 2019/790, Article 17(5).
4.5.1 Differences
Traditionally, under the ECD and InfoSoc Directives, the starting point was formed by the notice-and-takedown and notice-and-staydown procedures, to the extent that sufficiently specific information had been provided by the entitled party. Under the old legislative regime, ISSPs were in principle not liable for content uploaded by their users. Consequently, the implementation of upload filters was voluntary, and based on their freedom to conduct a business. The DSMD, however, introduces a procedure which I would define as 'takedown, unless'. This shows that the DSMD operates from another presumption: OCSSPs are in principle directly liable for infringing content uploaded by their users. Additionally, while Article 17 DSMD does not explicitly oblige OCSSPs to implement content recognition technology (CRT), this will de facto be the case. The implementation of an upload filter, however, is not new. As we have seen in Chapter three, the CJEU has already allowed filters to be used to block specific infringing content. Such an obligation could cover both existing content and future uploads. In that sense, the filtering obligation is an acknowledgement of existing case law. However, this does not mean it comes without possible negative effects, which will be addressed in the next paragraph.
4.5.2 Challenges
Content Recognition Technology: Content ID
Specific filtering obligations, as we have seen in Chapter three, imply that OCSSPs can be obliged to remove or block content corresponding to a reference file in the fingerprint database, but only insofar as they are able to do so by automated means.115 Automated filtering is therefore a necessary consequence of Article 17 DSMD. The effectiveness of CRT varies from case to case, as some content is identical to an entry in the fingerprint database, whereas other content is merely equivalent to it. With regard to content identical to reference files, upload filters will most likely be an appropriate solution.
The impact assessment in the Commission Staff Working Document is in line with this opinion.116 However, Content ID might prove problematic when used to detect equivalent content. In this case, the database does not include an exact copy of the reference file and therefore does not automatically show a match. Content ID has to take it a step further by classifying content in order to determine whether it may be equivalent to a reference file.117 Content ID has primarily enabled ISSPs to recognize infringing content in relation to the notice-and-takedown procedure, as explained in Chapter three. It has not been used to recognize the context in which content has been posted. As we will see, Content ID is not fit for this purpose. The question therefore becomes whether using Content ID to classify content may introduce challenges, and in what ways. First, if similarities have been found, CRT is unable to assess whether they amount to a copyright infringement or not.118 In some cases legitimate content will be interpreted as infringing, while in reality it should not have been. These so-called false positives are then removed, and users no longer have access thereto.119 Content recognition technology is unable to avoid false positives.120 Second, the classification of content could be problematic with regard to the exceptions described in Article 17(7) DSMD. The lack of contextual nuance might cause filters to misinterpret content which falls within one of the exceptions, such as pastiche or caricature.
114 Directive (EU) 2019/790, Article 17(10). 115 Kuczerawy, A. & Rauchegger, C. (2020), p. 1513. 116 European Commission, Commission Staff Working Document: Impact Assessment on the modernisation of EU copyright rules, Part 3/3, SWD(2016)301 final, p. 164-165. 117 Gorwa et al. (2020), p. 3.
After all, UGC often contains creative uses of existing protected works.121 It is therefore hard to imagine Content ID being able to separate a legitimate parodic use from a prohibited quotation without viewing a YouTube video in its relevant context. Again, this most likely leads to excessive blocking of content. Third, automated decisions made through CRT are inherently non-transparent. Especially in content moderation and from a user's perspective, the specific criteria by which those decisions are made remain unknown.122 This is strengthened by the fact that YouTube does not want to sell or share its Content ID technology. In any case, filtering technologies are not as advanced as the CJEU might have hoped, and are unable to fulfill the obligations of Article 17 DSMD in a fully automated manner.123 Human monitoring will continue to play an essential role in the contextualization of UGC, especially regarding equivalent content, or content posted in a particular context which should be recognized as falling under an exception.
Prohibition on general monitoring obligations
It has become clear that the DSMD obliges OCSSPs to detect illegal content by implementing filtering technology. As this requires the monitoring of all content, one might ask how a general monitoring obligation is defined, and whether Article 17 DSMD succeeds in preventing one. In my view this cannot be achieved by a mere statement, like that in Article 17(8) DSMD: "The application of this Article shall not lead to any general monitoring obligation." By using the same term as Article 15 ECD, the Commission has, in my view, left no doubt that the prohibition is also applicable to OCSSPs. Additionally, Article 1(2) DSMD states that the directive "shall leave intact and shall in no way affect existing rules laid down in the directives currently in force in this area." The statement in Article 17(8) DSMD should therefore have been elucidated more clearly in the text of the directive.
However, a possible explanation might be that the Commission is referring to voluntary cooperation by OCSSPs. The voluntary basis would make incompatibility with Article 15 ECD difficult to construe, as Article 17 DSMD would impose a general monitoring obligation on them de facto, but not ex lege.124 It would therefore not qualify as an obligation per se. Still, it is worth exploring the risk of general monitoring, as OCSSPs are left with substantial legal uncertainty in this regard. I will therefore explore several views on this matter, before concluding on whether Article 17 DSMD constitutes a (forbidden) general monitoring obligation. For the purpose of this thesis, I will assume that the definition of the prohibition on general monitoring obligations is equal to that of Article 15 ECD. However, future case law may show whether the prohibition should instead be seen as a lex specialis, separate from the legislative regime of the ECD.125 First, algorithmically blocking specific content automatically implies monitoring of all content. The DSMD will therefore force YouTube to compare each upload to a database of fingerprints, a situation which seems to correspond to most definitions of the word 'general'.126 Some argue that there is simply no way that the monitoring of all content can be a specific monitoring obligation. To them, filtering obligations are incompatible with the general monitoring prohibition, even if they concern a specific, pre-identified right. This is generally known as the 'basic' interpretation of Article 15 ECD.127
118 Engstrom & Feamster (2017), p. 64. 119 See, for example: Depoorter, B. & Kirk Walker, R. (2013); Spoerri, T. (2019). 120 Geiger, C. & Justin Jütte, B. (2021), p. 35. 121 Ibid., p. 36. 122 Ibid., p. 11. 123 See, for example: Spoerri, T. (2019). 124 Frosio, G. (2019), p. 15. 125 In this regard, see: Senftleben, M. & Angelopoulos, C. (2020), p. 24-27.
This interpretation was embraced by the CJEU in the L'Oréal v.
eBay and SABAM Cases, which have been addressed in Chapter three. Second, the 'basic minus' interpretation leaves room for the imposition of monitoring obligations concerning all or most of the information handled by the intermediary, if this general monitoring is carried out in search of infringements of a specific right.128 Proponents of this definition do not necessarily equate the monitoring of all content with a general monitoring obligation. In this line of reasoning, 'specific' refers to the content which the intermediary is trying to identify. It does not matter how large a share of the total uploads is being examined, as long as there is a specific file against which the content is compared. In turn, the basic minus interpretation can be split into two trends. Proponents of 'basic single minus' argue that all court-ordered monitoring amounts to specific monitoring.129 Here it is irrelevant what content is being filtered, or how much. As long as the filtering obligation is ordered by a court, it will qualify as specific and is therefore not prohibited. 'Basic double minus', on the other hand, permits the monitoring of all content only when it is ordered by a court and addresses a specific illegality which has been brought to the intermediary's attention beforehand. In other words, it adds the requirement of pre-identified illegal content, irrespective of the source.130 In the basic double minus interpretation, the term 'specific' refers to the fact that a court order can only oblige an OCSSP to filter and block certain pre-identified content. The Glawischnig-Piesczek v. Facebook Case, although it concerned ISSPs, is also useful in the context of OCSSPs, because it deals with the question of what can be considered a specific filtering obligation and what a (prohibited) general filtering obligation. According to the Court, service providers may be ordered to block both equivalent and identical content, regardless of whether it originates from the same user.
Additionally, a specific monitoring obligation forcing the provider to filter all content on its platform is not prohibited.131 This marks a shift towards the basic single minus interpretation and indicates the trend towards more responsibility for online intermediaries regarding infringing content. The definition of Article 15 ECD has become more flexible and does permit a specific filtering obligation regarding identical or equivalent content. The Court further noted that the specific content must be properly identified in the injunction, and that providers may not be forced to carry out an independent assessment of the content.132 This means that the obligations following from Article 17 DSMD may, in principle, be compatible with the general monitoring prohibition. However, insufficiently specified injunctions may cause the prohibition to tilt towards a general monitoring obligation. The same goes for cases in which an independent assessment must be made and/or in which it is unclear whether content is equivalent. Finally, I would like to remark that it is yet to be determined how national courts should define the term 'equivalent content' in injunctions. In turn, this might lead to disputes about the classification of content as equivalent to content that has previously been held to be illegal. In other words, should a national court draw up a list of equivalent content, or specify what it is that made the original content illegal? As long as this ambiguity in the obligations of national courts persists, the consequences fall on OCSSPs: they will either block content which was not equivalent at all, or become liable for content which, in retrospect, turns out to be equivalent and thus illegal.
126 See, for example: Bridy, A. (2019), p. 354-355. 127 Angelopoulos, C. & Senftleben, M. (2021), p. 8. 128 Ibid. 129 See, for example: Spindler, G. (2020). 130 Angelopoulos, C. & Senftleben, M. (2021), p. 9. 131 Kuczerawy, A. & Rauchegger, C. (2020), p. 1504. 132 Case C-18/18, Glawischnig-Piesczek v.
Facebook Ireland, para. 45.
This paragraph has shown that the boundary between general and specific filtering obligations remains unclear. One can argue about the definition; the boundary may change over time due to future case law, and it may differ between service providers. However, the general trend seems to point towards a more inclusive interpretation of permitted specific monitoring.
4.6 Challenges to Human Rights
At its core, Article 17 DSMD aims to solve the Value Gap by striking a fairer balance between copyright protection, the unlicensed exploitation of copyrighted works and the fundamental rights of users. This paragraph focuses on the challenges that mandatory upload filters may pose to the fundamental rights involved.
4.6.1 Freedom of Expression and Information
Case C-401/19, Poland v. Parliament and Council, is a suitable illustration of the problems Article 17 DSMD poses to the freedom of expression and information of users. It is an action brought by the Republic of Poland, seeking the annulment of Article 17(4)(b) and (c) DSMD.133 At the heart of this Case lies the question of how OCSSPs can fulfill their Article 17 DSMD obligations without encroaching upon the fundamental rights of their users. Poland primarily claims that Article 17 DSMD infringes the right to freedom of expression and information guaranteed by Article 11 of the Charter of Fundamental Rights of the European Union.134 On the one hand, there are those who believe that all content which corresponds with, or closely resembles, a reference file must be preventively blocked.
On the other hand, opponents believe that the preventive blocking of allegedly infringing material can potentially negate fundamental rights by over-blocking content.135 OCSSPs such as YouTube will find themselves in the middle of this dispute, and the line between infringing works and legitimate user-generated content can be very thin.136 Over-blocking encroaches upon users’ fundamental rights, while not preventively blocking might cause direct liability for OCSSPs and financial harm to rightsholders. I believe that OCSSPs have an economic incentive not to take risks when it comes to blocking infringing content and the liability that would otherwise follow. After all, companies are incentivized to maximize profit. In order to save resources, they will likely under-invest in the proper execution of the tasks following from Article 17 DSMD.137 Additionally, OCSSPs do not face direct risks for infringing human rights and do not have to live up to the same standards as state actors. This is known as the delegation trade-off.138 In other words, lack of compliance is punishable, but over-blocking is not. Consequently, for OCSSPs, the most straightforward way to circumvent liability is to block all content which closely resembles a reference file. As we have also seen in paragraph 4.5.2, there is a real threat that YouTube will over-block content at the cost of its users’ fundamental right to freedom of expression and information. Senftleben rightly emphasizes that “this approach entails a remarkable transformation of the function of copyright law. It becomes a central basis for content censorship in the online world”.139 After all, the filtering obligation de facto introduces a barrier to the free flow of content of which OCSSPs were unable to authorize the use. As we have seen, there are several reasons this may realistically happen to a significant amount of content. Admittedly, restricting unauthorized use is what copyright was originally intended to do.
However, as a result of Article 17 DSMD, the enforcement of copyright has shifted towards private actors. With regard to the challenges to users’ fundamental rights, this is troublesome. OCSSPs are not as independent and impartial as a judge. It is therefore not justified to make them balance requests from copyright owners, the sanction of liability and the demands of users for leaving their content on the platform. In the end, platforms will choose what is best for them.140 I therefore expect that OCSSPs cannot fulfill their Article 17 DSMD obligations without encroaching upon the fundamental rights of their users. The opinion in Case C-401/19 Poland v. Parliament and Council, originally scheduled for the 22nd of April 2021, has been postponed to a later date which is still to be announced. The final verdict will bring more clarity.
133 See, for example: Angelopoulos, C. & Quintais, J.P. (2019); Bridy, A. (2019); Senftleben, M. (2019); Senftleben, M. (et al). (October 17, 2017); Gann, A. & Abecassis, D. (June 2018); O’Brien, D. & Malcolm, J. (June 12 2018); Cerf, V. (et al.) (12 June 2018). 134 Centrum Cyfrowe Foundation. (September 2019); Centrum Cyfrowe supports openness and engagement in the digital world by making the world more inclusive, more cooperative and more open. They change the way people learn, participate in culture, use the internet and exercise their rights as internet users. 135 See, for example: Geiger, C. (February 2021). 136 Dusollier, S. (2020), p. 1018. 137 Husovec, M. (2021), p. 3. 138 Ibid. 139 Senftleben, M. (2019), p. 5.
In an attempt to counter this challenge, the EU legislature included Article 17(7) DSMD, which states that: “The cooperation between OCSSPs and rightsholders shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation.” However, I anticipate that this provision will not be able to resolve the challenge signaled above. After all, OCSSPs will not risk liability while investigating whether content is infringing or not. Another relevant topic in this regard is the tension between Article 17(4) and (7) DSMD, which will be addressed in the next paragraph.

Results vs Best Efforts – Article 17(4) and (7)
There is a tension between Article 17(4) (a best efforts obligation) and Article 17(7) DSMD (an obligation of result). Paragraph four imposes platform liability when an OCSSP fails to comply. As we have seen, an OCSSP is therefore most likely to over-block content, both knowingly (to avoid liability) and unknowingly (as a result of automated CRT). Paragraph seven, on the other hand, requires OCSSPs to prevent over-blocking. One might argue that the risk of over-blocking can be reduced by keeping content available on platforms while it is under review in accordance with the complaint and redress mechanism ex Article 17(9) DSMD. This would decrease the risk of infringements upon the fundamental rights of users. However, harm to rightsholders can also be inflicted by keeping infringing content temporarily available on a platform. YouTube videos, for example, can attract a significant number of views within a couple of hours, resulting in economic damage for the rightsholder.
In light of the main goal of the DSMD, to strengthen the position of rightsholders against OCSSPs, one could argue that if it is unclear whether a video infringes a rightsholder’s copyright, an OCSSP will block it until the content is reviewed. After all, reviews would merely constitute a “temporal inconvenience” that is justified given the purported overall objective of the directive to strengthen the position of rightsholders vis-à-vis platforms.141 Article 17 DSMD incentivizes OCSSPs to prioritize copyright protection over the right to information and freedom of expression. At the same time, OCSSPs are obliged to balance fundamental rights.142 Article 17 DSMD therefore fails to establish a fair balance between the rights involved, which might cause Member States to implement the Directive in different ways. In turn, legal certainty and uniformity are absent. Additionally, a restriction of the rights under Articles 11 and 13 of the EU Charter of Fundamental Rights (EUCFR) by preventing lawful uploads is generally not justified.143 The reason for this will be illustrated by means of an example. In Yildirim v. Turkey, the Court ruled that “any preventive measure that restricts the right to freedom of expression, although not in principle irreconcilable with Article 10 […], is inconceivable without a framework establishing precise and specific rules regarding the application of preventive restrictions on freedom of expression”.144 The same reasoning should be applied to Article 17 DSMD and the possibility of over-blocking. The Court further considers it a necessity to balance the restrictive measure against the fundamental rights involved, which must also be reflected in the legal basis that permits the restriction of the right to freedom of expression.145
140 Dusollier, S. (2020), p. 1016 – 1017. 141 Keller, P. (November 11 2020). 142 Directive (EU) 2019/790, Recital 70. 143 Geiger, C. & Justin Jütte, B. (2021), p. 43.
144 ECtHR (Chamber), 18 December 2012, case of Yildirim v. Turkey, Appl. no. 3111/10, para. 64. 145 Ibid., para. 66.
In my view, Article 17 DSMD lacks such a reflection in the form of a passage laying down precise and clear rules governing the restriction. The formulation of Article 17 DSMD therefore constitutes an unjustified restriction of Articles 11 and 13 EUCFR. The text does not clarify whether there is a hierarchy between the rights of users and rightsholders, nor does it provide a framework establishing precise and specific rules regarding the application of preventive restrictions on freedom of expression. This might be cause for concern, especially because private actors are engaged in the act of balancing the fundamental rights involved. The next paragraph will look at possible implications for the freedom to conduct a business in more detail.

4.6.2 Freedom to Conduct a Business
Automated filtering may also affect the freedom to conduct a business. We have seen that OCSSPs are unable to strictly separate infringing and legal content by automated means, especially in the case of equivalent content. OCSSPs must therefore implement, apart from costly CRT, a contextual analysis which cannot be fully automated. This will of course require significant investments in order to be fully implemented at scale, as the analysis would have to be made by human beings on a case-by-case basis. In fact, two hundred forty EU-based online companies explained in a letter to the European Parliament that small and medium-sized enterprises cannot afford to develop or deploy automated content recognition technologies and take on related expenses.146 In addition, Google has reportedly spent about $100 million on the development of Content ID,147 but it does not license that technology to external parties. Firms such as Audible Magic do license their CRT,148 but Article 17 DSMD may cause prices to rise significantly as a result of market forces.
After all, the shortage of supply of CRT technologies will inevitably strengthen the bargaining position of those who have succeeded in developing them. As a consequence, small and mid-sized OCSSPs might realistically not have direct access to CRT and would therefore be unable to comply with the best-efforts requirement of Article 17 DSMD.149 It will be interesting to see whether the best efforts requirement will be interpreted differently for those service providers.150 Additionally, users must be provided with an effective and expeditious complaint and redress mechanism that is available in the event of disputes over the disabling of access to content.151 This means that even after the initial investment in contextual analyses, an OCSSP has to invest in order to process user complaints without undue delay. The fact that copyright exceptions and limitations are not substantively harmonized within the EU means that there are also substantial information costs: the (il)legality of content may differ between countries.152 It remains to be seen just how much users would actually use this mechanism. Still, the possibility that a large part of users would file complaints introduces a challenge to the freedom to conduct a business, as the obligations may turn out to be disproportionately expensive. However, Article 17(6) DSMD frees some OCSSPs from liability, temporarily reducing their obligations significantly. Section 4.7 discusses these exceptions in more detail.

4.7 Exceptions
Article 17 DSMD introduces several exceptions from liability. First, Article 17(6) exempts smaller platforms from liability, based on the period for which they have existed and their annual turnover.
It states that for OCSSPs (1) the services of which have been available to the public in the Union for less than three years, (2) which have an annual turnover below EUR 10 million and (3) whose average number of monthly unique visitors does not exceed five million, the conditions set out in Article 17(4) are limited. These smaller OCSSPs do not have to comply with requirement (b) thereof. Consequently, smaller OCSSPs must make best efforts to obtain an authorization and act expeditiously, upon receiving a sufficiently substantiated notice, to disable access to the notified works or other subject matter or to remove those works or other subject matter from their websites.153 However, they do not have a filtering obligation in order to comply with a notice-and-staydown procedure. Essentially, the notice-and-takedown procedure remains intact for smaller platforms, which means they do not have to implement upload filters. Of course, this also mitigates for small platforms some of the challenges highlighted in the previous paragraph. Still, failing to satisfy any one of the criteria above means that upload filters immediately have to be implemented. Consequently, any platform which aims to expand its business will have to implement CRT eventually.
146 See Poortvliet, J. (March 19 2019). 147 See Bridy, A. (2020), p. 48. 148 See Ibid. 149 See, for example: Spoerri, T. (2019). 150 I would like to note that small and mid-sized OCSSPs in this context are the ones falling outside of the exception of Article 17(6) DSMD. OCSSPs which have been available to the public in the Union for less than three years, have an annual turnover below EUR 10 million and whose average number of monthly unique visitors does not exceed five million do not have a monitoring obligation. This exception is elaborated on in paragraph 4.7. 151 Directive (EU) 2019/790, Article 17(9). 152 Geiger, C. & Justin Jütte, B. (2021), p. 42.
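For illustration only, the three cumulative criteria of Article 17(6) DSMD can be expressed as a simple check. This is a hypothetical helper, not part of any official implementation; the thresholds follow the provision as summarized above, while the names and structure are my own:

```python
from dataclasses import dataclass

@dataclass
class Ocssp:
    years_available_in_eu: float    # time the service has been public in the Union
    annual_turnover_eur: float      # annual turnover in euros
    monthly_unique_visitors: int    # average number of monthly unique visitors

def light_regime_applies(p: Ocssp) -> bool:
    """Article 17(6) DSMD: all three criteria must be met cumulatively.
    Failing any one of them means the full Article 17(4) obligations,
    including upload filtering, apply."""
    return (p.years_available_in_eu < 3
            and p.annual_turnover_eur < 10_000_000
            and p.monthly_unique_visitors <= 5_000_000)
```

Under this sketch, a two-year-old platform with EUR 5 million turnover and one million monthly visitors would fall within the light regime; once any threshold is crossed, for instance at its third anniversary, the exemption lapses.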
Second, Member States must ensure that users of UGC-platforms are able to rely on any of the following exceptions or limitations when uploading and making available content generated by users on those services: (a) quotation, criticism, review; or (b) use for the purpose of caricature, parody or pastiche.154 This provision reflects the fundamental rights of users of UGC-platforms, which have been implemented in national law and case law in the past. Recital 70 specifically mentions the right to freedom of expression and the freedom of the arts. These should be weighed against the right to intellectual property. At the same time, online freedom of speech is one of the main pillars of Web 2.0, as it encourages people to interact with others from anywhere on the globe.

4.7.2 Challenges
Preventive measures “shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright.”155 In this light, Member States must ensure the exceptions mentioned above. However, while this safeguard could indeed prevent lawful content from being removed, it also forms the basis of a substantial amount of criticism.156 Arguably, Content ID is not capable of recognizing the context of posts, and will therefore block content which would otherwise have been deemed lawful. Article 17 DSMD could thus result in a breach of fundamental rights, which is contrary to what the Directive aims to achieve.157 The DSMD addressed this problem in Article 17(9) by requiring OCSSPs to put in place an effective and expeditious complaint and redress mechanism that is available to users of their services in the event of disputes over the removal of content. While this is a valuable addition, it does not prevent content from (temporarily) being removed, and thus from temporarily encroaching upon users’ fundamental rights.
Content ID is simply unable to automatically decide whether certain posts are made in a context resembling one of the exceptions. To avoid repetition, and because possible implications regarding exceptions are rooted in the same problem, I refer back to paragraph 4.5.2 for a more in-depth analysis of possible challenges.

4.8 Concluding remarks
In what ways does the new liability regime differ from the old situation, and what challenges does it pose?

Legal Uncertainty
First of all, Article 17 DSMD has brought legal uncertainty. While I would not necessarily define this as a challenge, it is worth mentioning because it is a direct result of Article 17 DSMD. First, there is legal uncertainty regarding the definition of an OCSSP as a result of the case-by-case basis on which platforms are categorized. It is unclear which platforms will, and which will not, fall within the DSMD. Second, depending on the final verdict in Cases C-682/18 & C-683/18, YouTube and Elsevier, there may be a difference in the interpretation of YouTube as an active host in cases falling within the DSMD, and as a neutral host in cases falling outside of its scope. Of course, case law and EU guidance will reduce uncertainty in due time.

Challenges to Fundamental Rights and the Participative Web
Licensing
The implementation of licensing obligations is too onerous because (1) it requires substantial expenses to be incurred at once, and (2) content must be temporarily removed and can only be reinstated once a license is in place. There should be more effective ways to implement licenses without excessively disrupting UGC-platforms. In some cases it is hard to obtain a license in the first place.
153 Directive (EU) 2019/790, Article 17(6). 154 Ibid., Article 17(7). 155 Directive (EU) 2019/790, Article 17(7). 156 See, for example: Bridy, A. (2019); Keller, P. (2020); Gann, A. & Abecassis, D. (2018). 157 Directive (EU) 2019/790, Recitals 70 & 80.
Some works have overlapping rights, all of which have to be licensed. Even more problematic is the fact that some rightsholders remain unknown. There is no way to contact copyright owners to obtain authorization for content that can only be identified by putting it online, which will infringe if no license precedes that act of making available. Licensing could therefore prove too burdensome. In those cases the relevant OCSSP will be directly liable as a result. Consequently, UGC will become less varied as OCSSPs will start to block content. This outcome is undesirable from the perspective of both fundamental rights and the preservation of the open, participative web.

Upload Filters
Article 17(4) DSMD provides OCSSPs which have not been able to obtain a licensing agreement with an alternative route to prevent direct liability. However, Article 17(4) DSMD introduces several challenges as well, mostly resulting from mandatory upload filters. First, it remains unclear what the best efforts requirement means in the case of failed authorization attempts. What if OCSSPs were unable to find the relevant rightsholder because they are unknown? Content ID is not well suited to classifying content as equivalent to a reference file. As a result, Content ID will cause so-called false positives. The lack of contextual nuance might also cause upload filters to block content that would normally fall under an exception of Article 17(7) DSMD: over-blocking. Additionally, decisions made through automated filtering by private enforcement by OCSSPs are inherently not transparent. It is untraceable what content is being removed and under which prerequisites, and therefore whether the removal is legitimate.

General Monitoring Obligations
Article 17 DSMD fails to explain why it shall not lead to a general monitoring obligation. We have seen that best efforts effectively require an OCSSP to implement CRT. It seems as if OCSSPs will de facto be obliged to implement upload filters and monitor all content.
Member States will have to ensure this. Personally, I do not expect the new situation to be incompatible with Article 15 ECD, because the general trend regarding the prohibition on general monitoring obligations is shifting towards a more inclusive notion of a specific filtering obligation. Still, insufficiently specified injunctions may cause the prohibition to tilt towards a general monitoring obligation. In this regard it remains to be seen how national courts should specify equivalent content, and in what form. Instances in which an independent assessment must be made may also lead to a prohibited general monitoring obligation.

Fundamental Rights
Filtering obligations de facto introduce a barrier to the free flow of content of which OCSSPs were unable to authorize the use. Over-blocking, caused by the delegation trade-off, is a realistic prospect and will lead to breaches of users’ fundamental rights. OCSSPs can therefore not fulfill their Article 17 DSMD obligations without violating the fundamental rights of their users. Additionally, the restriction of Articles 11 and 13 EUCFR by preventing lawful uploads is generally not justified. Article 17 DSMD lacks a framework establishing precise and specific rules regarding the application of preventive restrictions on freedom of expression. Moreover, Article 17 DSMD does not reflect that it balances the restrictive measures against the fundamental rights involved. In an attempt to reduce the unavailability of legitimate works, Article 17(7) aims to protect exceptions like quotation and pastiche. However, yet again, the DSMD gives no guidelines regarding the implementation of such a mechanism. According to Article 17(9) DSMD, users have the possibility to use a complaint and redress mechanism to assess whether allegedly infringing content should indeed be blocked. However, Article 17 DSMD does not explain how this mechanism achieves a fair balance between both interests.
Therefore OCSSPs are forced to strike a balance, but are incentivized by the overall goal of the DSMD to prioritize copyright protection over the freedom of speech and information. As a result of high development costs and market forces, small and medium-sized OCSSPs excluded from the exception of Article 17(6) DSMD will likely be unable to fully implement CRT. Additionally, platforms have to invest significantly to process user complaints without undue delay. Hence, the freedom to conduct a business is restricted, especially for small and medium OCSSPs which do not (yet) have a substantial revenue stream. There should be ways to (partly) circumvent some of the challenges signaled above. The next chapter therefore aims to look at alternative solutions to those challenges. At the same time, any alternative must still ensure that the Value Gap will disappear in the near future.

Chapter 5: Alternative solutions to the Value Gap
Chapter four has highlighted a number of challenges arising from Article 17 DSMD. The importance of closing the Value Gap should not negate the fact that fundamental rights and the participative web must also be respected. In this regard, some obligations in their current form may lead to disproportionate and undesirable consequences. Therefore, this chapter aims to search for possible solutions or improvements, without compromising the positive effects for the music industry. Even minor adjustments can remove certain challenges and improve the overall balance between the interests involved. Ultimately, this chapter will answer the following research question: In what ways, if at all, could the challenges identified in chapter four be addressed?

5.1 Legal uncertainty
Legal uncertainty leads to a situation in which stakeholders cannot foresee how certain elements will develop after the implementation of Article 17 DSMD.
Although this is an undesirable situation to be in upon the implementation of any new regulation, I would not necessarily describe it as a challenge. Legal uncertainty can only be resolved retrospectively, by removing ambiguities one by one. Slowly but surely, a whole will emerge in which all the elements are in tune. Sometimes it is simply impossible to foresee all pitfalls in advance. The CJEU will play a big role in the interpretation of the definition of an OCSSP, the difference between a neutral and an active host, the interpretation of communication to the public and the definition of general monitoring obligations. In this regard, the DSMD has provided a way to collectively discuss and resolve subjects about which legal uncertainty exists. Stakeholder dialogue allows the parties involved to discuss best practices for cooperation between OCSSPs and rightsholders. Based on those dialogues, the European Commission shall issue guidance on the application of Article 17 DSMD. In my view, this resolves, in due time, the legal uncertainty signaled in Chapter Four. It also enables stakeholders to influence future decisions which affect them. Stakeholder dialogues can then be used to resolve uncertainty regarding the definition of an OCSSP, the lack of clarity between neutral and active hosts, the interpretation of communication to the public and the definition of a general monitoring obligation.

5.2 Licensing
Perhaps there could be a simpler way to require platforms to pay for video-sharing. This could be done, for example, through a remuneration right scheme, rather than the exclusive right currently contained in Article 17 DSMD. The remuneration right scheme has a different premise, namely that the amount to be compensated per stream is fixed in advance. The current situation allows the rightsholder to negotiate after he or she has filed a notice with the OCSSP. As a result, OCSSPs are forced to delete the relevant content until a licensing agreement has been concluded.
Additionally, not all rightsholders are incentivized to conclude a license in the first place. Some rightsholders, like film producers who did not want to accept that their movies would be available on YouTube in their entirety, have opposed a remuneration right. The same reservations could apply to copyright owners in the context of a remuneration rights scheme, who would not like to suffer from the competition of free YouTube availability.158 Within the remuneration right scheme, a solution could be to introduce a set of standard licenses. Rightsholders are ensured a remuneration, and are alternatively allowed to opt out and have their content deleted from the platform. The only negotiations that would have to take place aim to make sure the standard licenses ensure a fair remuneration. Because artists can now join forces, and because a wide range of content is crucially important for YouTube, the negotiations should guarantee a fair remuneration for artists. The negotiating position of artists might even enable them to present YouTube with a collective ‘take-it-or-leave-it’ offer. At the same time, artists will have an incentive not to charge too much, because they themselves have an interest in YouTube as a channel for bringing their works to the attention of the public. I expect that this will create a fair economic balance between the ability for YouTube to offer a large variety of content and the interest of artists in receiving fair remuneration for their work.
158 Dusollier, S. (2020), p. 1014 – 1015.
If successful, fair remuneration becomes the norm. Additionally, the principle of Article 17 DSMD is maintained, as OCSSPs are in principle directly liable with regard to unlicensed content. At the same time, users’ fundamental rights are respected, as content would no longer need to be deleted from the outset. This situation would offer a simple and cost-effective alternative for both artists and OCSSPs.
Whenever a rightsholder opts out, the relevant OCSSP becomes directly liable through Article 17 DSMD. In my opinion, this solution is fair for both smaller and larger artists. Small artists have the security of receiving income for each stream, which they lacked under the safe harbor legislation. The same applies to larger artists, as the income rises linearly with the number of views. Of course, YouTube also continues to pay part of the advertising revenue to rightsholders. This could also be incorporated in the licenses mentioned above. Apart from benefiting rightsholders, the remuneration right scheme would solve several problems for OCSSPs. First, with standard licenses, the cost for an OCSSP of concluding licenses would decrease significantly, as licensing agreements would be concluded on equal terms and negotiations would no longer be the norm. Second, after the implementation of the DSMD, there is no period in which content has to be temporarily removed in order to comply with regulation. All rightsholders who have provided the OCSSP with the relevant information will automatically receive their remuneration. In turn, unknown rightsholders are incentivized to provide their information in order to participate in the new model. OCSSPs will not be disrupted as profoundly as under Article 17 DSMD in its current state, and users’ fundamental rights and the participative web are preserved. Additionally, effective licensing partially prevents OCSSPs from resorting to upload filters. While opting out of a standard license is not common in copyright law, there are more unusual provisions in the DSMD, like the shift from a ‘notice-and-takedown’ procedure towards a ‘takedown, unless’ procedure. At least this solution would ensure the highly sought-after revenue stream, while negating the negative effects of licensing obligations identified in Chapter 4.
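The payout logic of the proposed standard-license model can be sketched in a few lines. This is purely illustrative: the per-stream rate, the ad-revenue share and the opt-out flag are assumptions of the proposal above, not figures or mechanisms found in the Directive:

```python
def monthly_remuneration(streams: int,
                         per_stream_rate_eur: float,
                         ad_revenue_share_eur: float = 0.0,
                         opted_out: bool = False) -> float:
    """Remuneration under a hypothetical standard license: a rate fixed in
    advance per stream, plus any agreed share of advertising revenue.
    A rightsholder who opts out receives nothing; the content is removed
    and the OCSSP is directly liable for any copies that remain online."""
    if opted_out:
        return 0.0
    return streams * per_stream_rate_eur + ad_revenue_share_eur
```

Income rises linearly with the number of streams, which is the property the argument above relies on when claiming the scheme is fair to both smaller and larger artists.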
In my view, this solution, while it touches the acquis of copyright law, provides a daring proposal that would fundamentally change copyright law in the area of online intermediaries, and make it less fragmented across Europe. Paragraph 5.3 will investigate whether challenges regarding upload filters could be (partially) mitigated as well.

5.3 Upload Filters
Best Efforts
One of the cumulative requirements of Article 17(4) DSMD is that an OCSSP must make best efforts to conclude a licensing deal. We have seen that it remains unclear how an OCSSP should prove it made best efforts, especially if it failed to close a deal because the rightsholder is unknown. While it is possible that the implementation of the DSMD in the Member States may bring more clarity concerning the best efforts requirement of Article 17(4) DSMD, the contrary is conceivable as well. This ambiguity might cause Member States to implement this aspect in different ways. I would therefore like to propose a solution to the uncertainties surrounding the interpretation of the concept of best efforts, before addressing other challenges regarding upload filters. In cases where the rightsholder is known and content is clearly infringing, OCSSPs must actively try to contact the rightsholder in order to conclude a licensing agreement. In other cases, where the rightsholder is unknown or material is not clearly infringing, OCSSPs should be allowed to assume a more passive attitude. During this period of time, they are allowed to keep content online, provided they continue to do their best to find the relevant rightsholders. Whenever a rightsholder or collecting society presents itself, the relevant OCSSP must take active steps to conclude a licensing agreement.159 Only when a licensing agreement is a realistic possibility and fails is the OCSSP obliged to remove content from its platform and apply CRT in the future.
In my view, OCSSPs should then be granted a reasonable amount of time to detect and delete all relevant content before becoming directly liable. This period could, for example, be one day, i.e. 24 hours. Other solutions could emerge from the stakeholder dialogue mandated by Article 17(10) DSMD to develop guidance on the application of the new regime for content-sharing platforms. Unfortunately, this dialogue, which was launched in October 2019, has so far mainly resulted in an unhelpful repetition of the lobbying that accompanied the adoption of the Directive.160 This paragraph will continue with a discussion of transparency and over-blocking.
159 See Metzger, A. et al. (2020), at 6; Grisse, K. (2019), 892.

(Lack of) Transparency and Over-Blocking
First, every decision of an OCSSP should be as transparent as possible. Users should be notified when their content is blocked, both ex ante and ex post. Where possible, this notification should include (1) why their post is being removed, (2) under what conditions the post would have been allowed, and (3) what their rights are to appeal the decision. By the nature of many filtering algorithms, it can be practically impossible to understand how an automated system arrives at a decision. Requirement (1) can therefore only be fulfilled in certain cases, where the decision was (partly) a consequence of human review. Requirements (2) and (3) should be fulfilled for every automated decision. They enable users to use the complaint and redress mechanism, which will postpone the deletion of their files until they have been subjected to human review. In this regard I would like to refer back to the solution given in section 5.2. Second, I believe that users can play an important role in (partially) preventing CRT from over-blocking. In the current situation, the party best placed to qualify or define certain content is the user him- or herself. In my view, users should be asked to tag their content before posting it.
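The tagging proposal just introduced, users labeling their own uploads so that a CRT hit is routed to targeted human review, could work roughly as follows. This is an illustrative sketch of my proposal only: the tag vocabulary mirrors the Article 17(7) exceptions, and the single-tag cap with a mandatory short explanation is the anti-abuse safeguard elaborated below:

```python
# Tag vocabulary mirroring the exceptions of Article 17(7) DSMD
ALLOWED_TAGS = {"quotation", "criticism", "review", "caricature", "parody", "pastiche"}

def route_upload(crt_match: bool, tags: list[str], explanation: str) -> str:
    """Decide what happens to an upload under the proposed tagging scheme."""
    if not crt_match:
        return "publish"          # no hit against a reference file: publish as usual
    if len(tags) == 1 and tags[0] in ALLOWED_TAGS and explanation.strip():
        return "human_review"     # one valid tag: reviewer checks only that exception
    return "block"                # CRT hit without a valid tagged exception
```

A video matching a reference file but tagged "parody", with a short justification, would go to a reviewer who only has to assess the parody exception; untagged matches are blocked just as under a plain filter.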
Tags could correspond to one of the exceptions of Article 17(7) DSMD, such as pastiche, parody or review. CRT would recognize these tags and make an automated decision as it normally would. Tagged content which is deemed unlawful by CRT would then automatically be subjected to human review. In turn, this review will be more effective and less time-consuming, as the reviewer knows beforehand which exception to look for. As a result, the whole process is less disruptive for OCSSPs, and the risk of over-blocking, i.e. cases in which CRT does not recognize an exception and blocks the content, is greatly reduced. Additionally, all exceptions given in the DSM Directive will be harmonized across Europe and all users can rely on them accordingly. However, it is possible that users will abuse this system. In order to prevent excessive tagging, users must also provide their post with a short explanation as to why they feel their content should be allowed under one of the exceptions. Additionally, the number of tags per post could be capped at one. In turn, the reviewer would only have to investigate one exception per case, instead of all of them. This forces users to carefully consider their tag and short description before posting. At the same time, excessive tagging is avoided and OCSSPs only have to manually compare the allegedly infringing content to one of the exceptions of Article 17(7) DSMD. 5.4 Fundamental rights Incompatibility with Fundamental Rights – Users’ rights Both over-blocking and false positives infringe users’ rights to freedom of expression and information. Article 17(9) DSMD prescribes an effective complaint and redress mechanism which enables users to challenge decisions to block content. However, if content is removed in the meantime, users’ rights are temporarily violated. At the same time, rightsholders’ rights are violated when content remains online temporarily and it turns out afterwards that it was infringing.
I believe that the wording of Article 17 DSMD suggests that the freedom of information and expression must initially prevail over the rights of the copyright holders. If Article 17(7) DSMD is interpreted in this way, disputed content remains online until it has been deemed unlawful by human review. This is in line with the reasoning of the European Commission and the European Parliament during the hearing on the Polish challenge to Article 17 DSMD.161 Preventive blocking of content which needs an individual assessment is not backed by the legislative history of Article 17 DSMD, because “Article 17 now includes strong language that establishes new user rights and provides meaningful safeguards for preserving these rights.”162 These additions were essential in securing its adoption by the EU legislator, and should thus remain at least equally important. 160 Dusollier, S. (2020), p. 1020 et seq. 161 Geiger, C. & Jütte, B.J. (2021), p. 43; see further: Keller, CJEU hearing in the Polish challenge to Article 17: Not even the supporters of the provision agree on how it should work, available at: http://copyrightblog.kluweriplaw.com. 162 Communia Association (2020). I would therefore suggest a compromise. If a user invokes his or her right to a complaint and redress mechanism, there should be a period, for example 48 hours, in which the relevant OCSSP investigates the case and makes a decision. In the meantime, the content remains online. If it turns out afterwards that the content does indeed have to be removed, the relevant OCSSP will pay reasonable remuneration and must ensure that the content is filtered in the future. If it is not infringing, the content can simply remain online. This way, the fundamental rights of the relevant user are not encroached upon and rightsholders are reasonably compensated.
Additionally, a timeframe of 48 hours ensures the mechanism is effective and expeditious, while respecting the fact that human review may in some cases take longer than expected. Depending on how much the complaint and redress mechanism is used, manually reviewing all content could prove a difficult task and may take some time. Still, 48 hours should be seen as a maximum, and an OCSSP should try to carry out the human review as soon as possible. Users’ rights must prevail where there is a conflict between fundamental rights, at least until content is reviewed by a human. This is also in line with the licensing scheme mentioned in paragraph 5.2, as temporarily keeping content online immediately triggers an obligation of remuneration per view. Furthermore, this procedure would ensure users can rely on the exceptions of quotation, criticism, review, caricature, parody or pastiche. However, this brings me to an even more concerning point of discussion. The solutions mentioned above imply that preventive control by means of CRT does not satisfy the requirement that limitations on fundamental rights be proportionate and necessary. In this regard it will be interesting to see what the CJEU decides in Case C-401/19, Poland v. Parliament and Council, in which this question has been raised. Still, should the DSM Directive be implemented in its current form, I believe the solutions mentioned above to be efficient and effective with regard to better protection of the right to freedom of expression and information. In the next paragraph we will investigate whether the right to conduct a business could be respected to a larger extent as well. Incompatibility with Fundamental Rights – Right to conduct a business First, in addition to the implementation of the complaint and redress mechanism with an emphasis on users’ rights, the mechanism could be carried out by a neutral institution at EU level.
This ensures that important decisions relating to fundamental rights are no longer outsourced to private platform operators. The tasks delegated to the institution would include human content review and monitoring of the implementation of the DSMD in a manner consistent with fundamental rights, including the right to conduct a business, as well as the development of standards and/or best practices. The institution, which could for example be shaped with the Facebook and Instagram Oversight Board in mind, would be funded by an independent trust. The board would stand at the crossroads of the various fundamental rights and interests involved, without favoring one over the other out of self-interest. After all, we have found that Article 17 DSMD in its current form will not lead to a fair balance of fundamental rights. An EU-level institution could therefore more realistically contribute, through various mechanisms, to maintaining a fair and proper balance between the fundamental rights at stake. Furthermore, this solution would ensure EU-level harmonization, and OCSSPs would no longer be obliged to weigh fundamental rights themselves, which would reduce over-blocking as a result of de-risking as well as the delegation trade-off. Ultimately, an OCSSP is a commercial enterprise which is influenced by its own economic incentives. As the board would publicly share statements about its decisions and the rationale behind them, and release annual reports, transparency is increased. Second, for small and medium-sized OCSSPs, best efforts to implement CRT should be interpreted differently. After all, the obligations following from Article 17 DSMD are generally too onerous for smaller platforms to implement, due to high costs for the development and/or licensing of technology. The lack of effective CRT should not immediately be punished on the basis of the same criteria as apply to a giant like YouTube.
Allowing these specific OCSSPs some leeway ensures their right to conduct a business. I believe this could be achieved by introducing an extra exception placed between the exception of Article 17(6) DSMD and the full liability following from Article 17(4) DSMD. For example, the new paragraph could require medium-sized OCSSPs to make best efforts to implement the different elements prescribed by Article 17 DSMD. They should be granted a reasonable ‘interim’ period in which best efforts suffice. After the period has ended, they are fully liable under the DSMD in the same manner giants like YouTube are. Chapter 6: Conclusion This thesis focused on Article 17 of the DSMD, which was largely created as a tool to solve the Value Gap. This chapter summarizes the main findings per chapter. Subsequently, the main question will be answered: In what ways could the challenges Article 17 DSMD poses to our current legal framework, the open web 2.0 and the fundamental rights involved be resolved? What is the definition of the Value Gap? In Chapter 2 we established the definition of a UGC-platform. YouTube serves as a clear example of a large UGC-platform which played a big role in the creation of the so-called Value Gap. The need for a solution largely depends on the actual scale of the Value Gap. Chapter 2 therefore established that subscription platforms163 have 55.8% fewer users than YouTube, yet generate 397.9% more revenue. This is cause for concern, and appears to form the basis for a new legal landscape within copyright law with respect to online intermediaries. This raises the question what conditions allowed the Value Gap to develop. What is the existing regulatory framework for liability of intermediary service providers such as YouTube? Chapter 3 investigated the current legislative regime, which has enabled the Value Gap to expand.
First, Article 14 ECD contains the Safe Harbor provision, which shields intermediaries (ISSPs) from direct liability for infringing content posted by their users. There is only an ex post obligation to delete infringing content of which the ISSP has been notified by the relevant rightsholder. Safe Harbor protection is only available to neutral service providers. While there was no agreement on the definition of a neutral or active service provider at the time of writing, Chapter 3 established that UGC-platforms like YouTube currently do enjoy Safe Harbor protection. Indeed, the final verdict in Case C-682/18 YouTube, delivered on June 22nd 2021, seems to confirm this state of affairs. The court ruled that “the fact […] that the operator of a video-sharing platform, such as YouTube, implements technological measures aimed at detecting, among the videos communicated to the public via its platform, content which may infringe copyright, does not mean that, by doing so, that operator plays an active role giving it knowledge of and control over the content of those videos”.164 In addition to blocking infringing content after a notice from the relevant rightsholder, an ISSP may be ordered to monitor and block future infringing content, even if the service provider enjoys Safe Harbor protection. A specific monitoring obligation is not prohibited, provided the obligation is not excessive, is well specified in the injunction, and does not force OCSSPs to carry out independent assessments. Second, it was unclear whether or not an ISSP carries out a communication to the public. However, Case C-682/18 YouTube dealt with this question as well.
The final verdict states that operators of online platforms do not, in principle, themselves make a communication to the public of copyright-protected content illegally posted online by users of those platforms.165 This shows that ISSPs are indeed in principle not liable for content posted by their users, allowing them to make money from rightsholders’ content without paying a fair remuneration. Again, ISSPs can be requested to take and/or keep content offline, even if they are not directly liable on the basis of a communication to the public. This situation has allowed the Value Gap to grow, and in due time the European Commission introduced a solution in the form of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC. In what ways does the new liability regime differ from the old situation, and what challenges does it pose? Article 17 DSMD introduces a new subcategory of online intermediaries which explicitly do not enjoy Safe Harbor protection: Online Content-Sharing Service Providers. OCSSPs are hosting providers which organize and promote content and are therefore active hosts. This introduces a lex specialis for cases concerning copyright-protected material on platforms like YouTube. 163 For example: Spotify, Apple Music, Tidal & Deezer. 164 Joined Cases C-682/18 and C-683/18 YouTube, para. 109. 165 Joined Cases C-682/18 and C-683/18 YouTube, para. 109. Additionally, Article 17(1) DSMD states that OCSSPs do carry out a communication to the public. It equates the hosting, organization and promotion of copyright-protected content (the active host requirements) with a communication to the public. At the same time, Article 17(3) DSMD ensures that it shall not affect the possible application of Article 14(1) ECD for purposes falling outside the scope of the DSMD.
The fact that OCSSPs communicate to the public lays the foundation for a direct liability regime. This shows that Article 17 DSMD acts from a different perspective. Instead of the notice and takedown procedure, I therefore characterized the new situation as ‘takedown, unless’. ‘Unless’ refers to two exceptions through which an OCSSP can escape liability. First, there is the ability to conclude licensing deals. Second, OCSSPs failing to conclude a license are in principle directly liable, unless they have made best efforts to comply with Article 17(4) DSMD. This de facto amounts to the mandatory implementation of upload filters. Consequently, the DSMD differs from the old legislative regime on several crucial points. First, Article 17 DSMD introduces legal uncertainty with regard to the definition of an OCSSP, the definitions of a neutral and an active host, and that of a communication to the public. Second, the implementation of licensing obligations is too onerous, as it requires substantial expenses to be incurred at once, and content must be temporarily removed in the meantime. Third, upload filters are problematic in a broader sense. It is unclear what the best efforts requirement of Article 17(4) DSMD means if and when an authorization attempt fails. Content ID will cause false positives and over-blocking of legal content without making its decisions insightful. Additionally, Article 17 DSMD might prove problematic for fundamental rights as well, as filtering obligations de facto introduce a barrier to the free flow of content whose use OCSSPs were unable to authorize. Over-blocking breaches the fundamental rights of users. In effect, OCSSPs cannot fulfill their Article 17 DSMD obligations without encroaching upon the fundamental rights of their users. The DSMD therefore fails to balance the fundamental rights involved, and OCSSPs are incentivized to prioritize their own interests instead.
Lastly, the freedom to conduct a business is restricted, especially for small and medium OCSSPs which do not (yet) have a large revenue stream. In what ways, if at all, could the challenges identified in Chapter 4 be addressed? The DSM Directive, in principle, is a suitable instrument to close the Value Gap. However, adjustments will have to be made to balance fundamental rights and minimize the challenges mentioned above. In this regard, the following possible solutions have been identified. First, I believe that the stakeholder dialogue will solve legal uncertainties in due time. The Commission will issue Guidelines based on the outcomes, which, in turn, will ensure increased uniformity across Europe. In this manner, legal uncertainty regarding active and neutral hosts, the interpretation of communications to the public and the definition of general monitoring obligations and equal content will eventually be resolved. Moreover, this enables stakeholders to actively participate in negotiations. Second, the authorization avenue favored in Article 17 DSMD, requiring OCSSPs to close licensing deals, could have been shaped differently. Instead of an exclusive right for rightsholders to communicate to the public, I suggest a fair remuneration scheme in which the rightsholder is remunerated according to collective licenses. Rightsholders can opt out at any time. A fair remuneration scheme greatly reduces the costs for an OCSSP of concluding licenses on a large scale, as there would only need to be a set of standard licenses which apply to the relevant rightsholders. Content would no longer be temporarily removed in anticipation of a license, favoring users’ fundamental rights. Additionally, unknown rightsholders are now incentivized to provide relevant information to the platform. In turn, OCSSPs will not be disrupted as profoundly as under Article 17 DSMD in its current state, and fundamental rights and the participative web are preserved.
Additionally, effective licensing prevents OCSSPs from resorting to upload filters. Third, challenges arose with regard to upload filters. In cases where the rightsholder is unknown or the material is not clearly infringing, OCSSPs must be allowed to assume a more passive attitude and keep content online. Only when the rightsholder presents itself is the OCSSP obliged to take action to conclude a licensing agreement. This interpretation of the best efforts requirement of Article 17(4) DSMD to conclude licenses ensures more uniformity between Member States, as well as increased legal certainty and a diverse range of content. Additionally, users could play an important role in the prevention of over-blocking by tagging their content before uploading it to a UGC-platform. Tags could correspond to one of the exceptions of Article 17(7) DSMD, such as pastiche, parody or review. Human review would then be more efficient. As a result, the whole process is less disruptive for OCSSPs, and the risk of over-blocking, i.e. cases in which CRT does not recognize certain content as an exception and blocks it, is greatly reduced. Additionally, all exceptions given in the DSM Directive will be harmonized across Europe and users can rely on them accordingly. Furthermore, users should be notified when their content is blocked. Where possible, this notification should state (1) why their post is being removed, (2) under what conditions the post would have been allowed, and (3) what their rights are to appeal the decision. In order to further protect users’ fundamental rights, and because preventive blocking of content which needs an individual assessment is not backed by the legislative history of Article 17 DSMD, I suggested the following. Whenever a user invokes his or her right to a complaint and redress mechanism, there should be a period in which the relevant OCSSP investigates the case and determines whether the content is infringing or not.
In the meantime, the investigated content remains online. If it turns out afterwards that the content does indeed have to be removed, the relevant OCSSP will pay reasonable remuneration and must ensure that the content is filtered in the future. If it is not infringing, the content can simply remain online. This way, the fundamental rights of the relevant users are not violated and rightsholders are reasonably compensated. With regard to the right to conduct a business, the complaint and redress mechanism mentioned above could be carried out by a neutral institution at EU level. The institution could be shaped in the form of an oversight board, comparable to what Facebook and Instagram have implemented recently. An EU-level institution could more realistically contribute, through various mechanisms, to maintaining a fair and proper balance between the fundamental rights at stake. Furthermore, this solution would ensure EU-level harmonization, and OCSSPs would no longer be obliged to weigh fundamental rights themselves, which would reduce over-blocking as a result of de-risking as well as the delegation trade-off. Its tasks would include human content review and monitoring of the implementation of the DSMD in a manner consistent with fundamental rights, including the right to conduct a business, and the development of standards and/or best practices. The board could be funded by an independent trust. Additionally, with regard to small and medium OCSSPs not covered by the exception of Article 17(6) DSMD, best efforts to implement CRT should be interpreted differently in order to guarantee the right to conduct a business. This could be achieved by introducing an extra exception placed between the exception of Article 17(6) DSMD and the full liability following from Article 17(4) DSMD. They should be granted a reasonable period of time in which best efforts suffice.
Only after this period has expired are they fully liable under the DSMD in the same manner giants like YouTube are. Bibliography LEGISLATION The European Parliament and the Council of the European Union. (2019). Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC. The European Parliament and the Council of the European Union. (2004). Directive 2004/48/EC of 29 April 2004 on the enforcement of intellectual property rights. The European Parliament and the Council of the European Union. (2001). Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the information society. The European Parliament and the Council of the European Union. (2000). Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market. BOOKS & JOURNALS Angelopoulos, C. & Senftleben, M. (2021). The Odyssey of the Prohibition on General Monitoring Obligations on the Way to the Digital Services Act: Between Article 15 of the E-Commerce Directive and Article 17 of the Directive on Copyright in the Digital Single Market. SSRN. Angelopoulos, C. & Quintais, J.P. (2019). Fixing Copyright Reform: A Better Solution to Online Infringement. JIPITEC, 10(2), 147 – 172. Angelopoulos, C. (2013). Beyond the Safe Harbours: Harmonising Substantive Intermediary Liability for Copyright Infringement in Europe. Intellectual Property Quarterly, 3, 253 – 274. Arditti, D. (2020). iTake-Over: The Recording Industry in the Streaming Era (2nd ed.). Lexington Books. Bagshaw, R. (2003). Downloading Torts: An English Introduction to On-Line Torts, in: Snijders, H. & Weatherill, S. (eds.), E-Commerce Law: National and Transnational Topics and Perspectives. Kluwer Law International. Barker, G.R. (2019). Global Music Revenues, Music Streaming and The Global Music Value Gap.
SSRN, retrieved from papers.ssrn.com. Burgess, J. & Green, J. (et al.). (2009). YouTube: Online Video and Participatory Culture (1st ed.). Cambridge, UK: Polity Press. Bridy, A. (2019). The Price of Closing the “Value Gap”: How the Music Industry Hacked EU Copyright Reform. Vand. J. Ent. & Tech. L., 22, 323 – 358. Burk, D.L. (2019). Algorithmic Fair Use. University of Chicago Law Review, 86, 283 – 307. Burri, M. & Zihlmann, Z. (2021). Intermediaries’ Liability in Light of the Recent EU Copyright Reform. Indian Journal of Intellectual Property Law, 12, 1 – 24. Christodoulides, G. (et al.). (2012). Memo to Marketers: Quantitative Evidence for Change – How User-Generated Content Really Affects Brands. Journal of Advertising Research, 52(1), 1 – 19. Depoorter, B. & Kirk Walker, R. (2013). Copyright False Positives. Notre Dame Law Review, 89(1), 319 – 359. Dusollier, S. (2020). The 2019 Directive on Copyright in the Digital Single Market: Some Progress, a Few Bad Choices, and an Overall Failed Ambition. Common Market Law Review, 57, 979 – 1030. Frosio, G. (2018). Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility. International Journal of Law and Information Technology, 26(1), 1 – 33. DOI: 10.1093/ijlit/eax021. Frosio, G. (2019). Reforming the C-DSM Reform: A User-Based Copyright Theory for Commonplace Creativity. SSRN, DOI: 10.2139/ssrn.3482523.