Generative artificial intelligence (AI) is garnering significant attention. Tools like ChatGPT are trained on vast amounts of text and language data, allowing them to generate human-like responses to a wide range of prompts.
Since launching in November 2022, ChatGPT has acquired over 100 million active users. Given the popularity of ChatGPT and similar tools, companies across the globe are wondering how (or whether) to use them.
AI has helped companies create marketing content such as blog posts and social media updates, draft employee policies, and write simple contracts. ChatGPT can even integrate with existing customer service platforms to provide responses in interactive chats.
Many AI tools work by learning a user’s preferences and can receive training based on a company’s needs. This creates tremendous potential for using generative AI to your organization’s benefit. But you must know how to use it safely and properly. Before your company dives into a new AI use case, here are some key issues to be aware of:
The legal implications of using generative AI tools are still forming
Harvard Law School recently released an article titled “The Implications of ChatGPT for Legal Services and Society.” The article argues that one of the challenges to using generative AI tools is their inability “to account for the nuances and complexities of the law.” This may not seem like a big deal to a user looking for a basic answer to a basic question. However, businesses should be careful when taking advice from AI tools or using AI to create contracts, provisions, policies, or other documents with legal force.
For example, suppose a human resources manager asks AI to generate a sick leave policy for employees with mental health issues. The AI tool may produce something that appears coherent and structurally correct, but it’s unlikely to be legally compliant. AI tools frequently fail to consider all relevant factors, including the many details and facets of federal, state, and local laws.
The intersection between AI tools and intellectual property law is still murky
ChatGPT and similar AI platforms aggregate information from available internet sources. While AI content is technically original, it may violate another party’s copyright or otherwise run afoul of intellectual property laws. This problem is compounded by the fact that AI often can’t reliably cite its sources.
A related issue is that U.S. copyright laws have not yet caught up with AI technology. According to ChatGPT’s creator, OpenAI, users retain the legal rights to any inputs they provide, but the ownership of outputs is not yet clear.
Before allowing companywide use of generative AI tools, consider privacy
AI can help you reduce response times, handle basic customer service requests, and even manage schedules, but it’s only as helpful as the information you put in.
And you must be extremely cautious about the information you put in. Although ChatGPT’s privacy policy promises to share input data only as needed to provide its services and never to sell it, it still cautions users not to put sensitive data into its chat box.
Data put into ChatGPT is saved for a period of time, which may expose it to hackers looking to exploit your information. You can protect your company’s and clients’ sensitive information by enforcing a companywide policy of never putting sensitive data into AI tools.
You may wish to take a risk-averse position and refrain from using client names. But if you do choose to use client names (for market, industry, or competitor research), you might consider amending your privacy policy to include the use of certain client data for AI-assisted research purposes.
Generative AI tools like ChatGPT don’t always produce accurate answers
As previously discussed, generative AI tools like ChatGPT can’t reliably cite the sources used to collect data or produce responses. Further, many popular AI platforms work off a static set of training data, meaning you won’t get the most up-to-date information in response to your prompt.
Perhaps most troubling, generative AI tools have been known to produce responses that are biased, incendiary, or just plain incorrect. You should always carefully fact-check outputs before using them internally or externally.
SEO is catching up to AI-generated content
Search engines may not penalize you for using AI-generated content now, but they might in the future. You must continue to weigh the pros and cons of using AI-generated content with the expectation that search engines like Google may change their policies.
In April 2023, RockContent reported that “Google is not giving you blanket permission to generate poor-quality content with these tools just for the purpose of tricking the search engines into ranking you higher.”
Furthermore, Business Insider reports that Google is planning to debut a new feature allowing users to identify whether an image is real or AI-generated.
VANTREO-Acrisure is here to help. The AI landscape is shifting rapidly, with new use cases uncovered every day. Many small and midsize businesses are seeking outside help with making AI tools work for them. If you have questions regarding AI risk or how insurance does and does not apply to AI business use, please reach out to your legal counsel or simply reply here.