Generative AI Tools Can Present IP Risks, But They’re Manageable
The surge in news coverage of generative artificial intelligence (AI) tools like ChatGPT and Midjourney has employees eager to discover how these accessible tools can make their jobs easier. Employers, meanwhile, are concerned about the legal implications of using such tools and are weighing different approaches to their own AI usage policies.
However, blanket policies that attempt to cover the risks of any AI technology can be overly restrictive, or so generic that they fail to address the legal considerations specific to each tool. The rules for using AI technology under an enterprise license are likely more permissive than those for consumer-facing AI tools, because enterprise licenses typically include broader confidentiality and indemnity protections. For this reason, employee guidance on AI usage should identify the specific AI tools it covers and should distinguish between AI tools offered under both a personal and an enterprise license, such as ChatGPT. Those interested in a more general conversation about current AI frameworks can see our article here.
Broadly speaking, AI tools can present both incoming and outgoing intellectual property issues. That is, usage can increase a company’s risk of infringing third-party intellectual property, and it can also increase the risk that the company’s own intellectual property will be improperly disclosed. In evaluating specific use cases, companies should ask the following questions: