AI: Where we’re at two years on

CEO Barbara Hayes discusses what has happened in the two years since generative AI burst onto the scene, as well as the journey ALCS has been on to ensure writers' rights and livelihoods are protected.

Since the launch of ChatGPT two years ago, the use of copyright-protected works to train generative AI (systems designed to create new content by ‘training’ on original material) has been a source of constant debate.

That’s why, earlier this year, we ran a survey to better understand our members’ views on the subject. It produced the largest response of any survey we’ve ever conducted and revealed a range of opinions, many concerns and some confusion from authors wondering what is happening to their works. On 3 December we’re going to release those findings in full at the winter All Party Parliamentary Writers Group meeting.  

When the Government launches its much-anticipated consultation on this subject in the coming weeks, our primary ask, echoed by the majority of the 13,500 respondents to our survey, will be for legal regulations requiring far greater transparency from AI companies, on which works are being used to train which systems, why, where and by whom.  

Without greater transparency it is hard to see how any properly functioning system can develop where authors and other rightsholders can, if they choose, license their works for use in the training of AI systems. As we set out in a statement on AI in the spring, ALCS views choice for authors as essential. It’s clear from the responses that our members agree.  

The survey also made clear that our members expect ALCS to act on their behalf. So, through our advocacy work we will continue to push back on bad policy options, such as new copyright exceptions or legal requirements for authors to opt out their works if they don't want them to be freely accessible to AI companies for training purposes. We don't believe these are workable. Nor do others, like Ed Newton-Rex, the AI executive who mobilised 35,000 creators from around the world to sign a petition stating that AI is a threat to the livelihoods of creators.

In our licensing work, we have been working with partners at the Copyright Licensing Agency to develop potential models for collective licensing: licence models, based on transparency and choice, that would offer writers alternative solutions in the broader licensing marketplace.

The main outcome of this work is a proposal for a text and data mining (TDM) and ‘prompt’ licence. This will not involve the use of works for generative AI training purposes; rather it covers existing and increasingly commonplace activities in the workplace where individuals use the available technology to analyse and extract information from works, for example by summarising the key points within an academic journal article. 

We know that licensing the training of generative AI systems is a much broader and more complex area. This year we have seen various deals between publishers and AI companies as well as a growth in the amount of litigation between these two groups.  

Collective licensing, such as that carried out by ALCS, only arises in cases where direct deals are impossible. It may be that, in the future, this applies to the training of generative AI; however, that prospect seems relatively distant right now, not least because many of the AI companies are robustly advocating for copyright exceptions while carefully selecting publishers to do deals with.

Our survey results indicate that our members would like ALCS to provide an option for licensing generative AI training should the conditions for collective licensing arise in the future, whether for compensation for past uses or a licence for future uses.  

For that reason, at our AGM this year we've asked our members to vote on an addition to the current mandate which would cover text and data mining, use for prompting, and use for training generative AI. The reluctance of the technology companies to seek licensing, and their arguments that 'this is fair use', suggest that collective licences for training purposes are some way off, but we do believe we need to be prepared for future opportunities.

We would operate on the basis that individual members would have the right to opt in to any training licences rather than opt out, to ensure that members were consenting fully to being part of any new licences or collections. We think transparency and choice are key, and that is what is missing from the current situation.

We are very mindful of the sensitivities and differing views out there. In developing any models for collective licensing around AI training, ALCS will consult with and seek the approval of the authors' unions and the collective bodies representing literary agents before moving forward on any licensing in this area.

As we continue this journey through an ever-changing technological landscape, we will keep advocating strongly on our members' behalf against policies or regulations, in the UK and internationally, that would remove or diminish authors' rights. I don't know at the time of writing whether the Government consultation or our AI findings report will be launched first, but I do know that ALCS has been given a very clear steer from our members, so thank you once again for telling us.

You’ll be able to read the findings of the ALCS AI survey from midday, 3 December at www.alcs.co.uk