The rise of AI-generated content and the growing use of Large Language Models have sparked important questions about their impact on authors' rights. Since Generative AI is relatively new, many legal issues remain unresolved, with ongoing court cases shaping the landscape. This page provides information on these concerns and offers guidance to scholarly authors navigating the current legal and ethical challenges surrounding AI and copyright.
Generally, no. U.S. copyright law requires human authorship, so work generated autonomously by an AI model is not copyrightable; a U.S. District Court reached exactly this conclusion in 'Thaler v. Perlmutter.' However, if there is significant creative human involvement in the creation or editing of an AI-generated work, the result may qualify for copyright protection.
The short answer is: it depends! In some cases, publishers have licensed their catalogs for use in training AI models. A notable example is the academic publisher Taylor & Francis, which sold Microsoft access to its research content for AI training. In such cases, the legality of using the copyrighted work depends largely on the terms of the publishing contract between the publisher and the author.
However, even when a publisher or copyright holder has not explicitly licensed a work for AI training, AI companies may still argue that training on it constitutes fair use. Whether that argument holds up is still largely unresolved, with several cases working their way through the courts.
There is currently no reliable way to know whether an AI model has been trained on your research. Some AI companies publish their training datasets in public repositories, which can be discovered through tools like Google Dataset Search, but this practice has not been adopted by all AI companies.
It is impossible to fully prevent AI models from training on your published work, but you can take steps to limit its use: publish in venues that explicitly prohibit AI data scraping, and submit removal requests when companies offer opt-out mechanisms for their training datasets. Even with these precautions, your work may still end up being used to train AI.
Currently, most academic writers are not compensated when their work is used to train AI models. Compensation is typically not offered unless a specific agreement or legal framework mandates it, though ongoing legal and policy discussions may shape this landscape in the future.
Yes, you can use AI-generated content in your scholarly work, but it's essential to proceed with caution. Utah State University advises researchers not to enter confidential, proprietary, or restricted data into AI tools because of data privacy and ownership concerns. Publishing standards for AI-generated content also vary: some journals prohibit it, while others permit it with proper disclosure, so always check your target publication's policies on AI use. Finally, confirm that any funding agencies or collaborators involved in your research place no restrictions on AI use, and keep them informed about how AI is incorporated into the project. More information on USU's guidance for the use of AI in research can be found here: Artificial Intelligence in Research