Main Contributor: Gretchen Sturdivan, CSCP, Compliance Manager & Creative Director

Background

There doesn’t yet seem to be a consensus among RIAs on how their firms are using generative artificial intelligence (“AI”) in the workplace. Generative AI is capable of producing documents and art, rather than just producing search results. Some are noodling around with generative AI tools such as ChatGPT to draft poems, while others find Copilot most helpful for generating amusing cat images from prompts like, “Can you create a picture of a cat on top of a volcano in the style of a comic book?”

Others, however, are digging in to discover the potential benefits and ways in which generative AI can enhance their reporting capabilities or market research, which brings us into a new era. Innovations in AI can provide helpful insights, time-saving content, and formatting ideas, but they can also create a slippery slope of cybersecurity threats and risks when users are not well-informed.

This article will address a few areas for consideration when the Chief Compliance Officer (“CCO”) is developing policies and procedures around the use of generative AI within the firm. The goal of the policies and procedures should be to address cybersecurity threats connected with generative AI use, mitigate risks associated with potential misinformation, protect client data, and ensure compliance with the ever-evolving SEC regulations and exam priorities.

Everyone Can Agree

Employees of an RIA must be made aware that confidential, sensitive, or proprietary information should never be submitted to generative AI services without first conducting due diligence to understand the potential security concerns and/or the controls in place. This includes (but is not limited to) submitting source code, information about your clients, financial information derived from client documents, operational processes, or personally identifiable information. That brings us to the difference between open and closed AI.

Open/Public AI

In an open AI approach, the code is made publicly available, often as free, open-source software. These services are generally not secure platforms to use for investment advisory purposes. Until the potential benefits and risks are better understood and technical controls evolve to block dangerous AI websites and services, the use of open AI web services for conducting business at your firm should be carefully thought through, so that you are not compromising client information and/or proprietary firm information.

However, open-source models are often championed by promoters of ethical AI, as they are generally considered more transparent and understandable than closed-source models.

Closed/Private AI

In a closed AI approach, the code is kept private, and your data is kept within a controlled environment. It is private property that has been made available for public use, and as a result, the owners are incentivized to protect their models against data compromises and unauthorized access.

Closed AI would be the safest model for an RIA to use for business purposes.

Security Considerations

When using generative AI models and tools, the following precautions should be taken to protect against the loss or misuse of client records and information:

  • Ensure your account has multi-factor authentication (“MFA”) enabled, where possible.

  • For open/public AI tools, disable chat history and model training within the program’s settings.

  • If there's a business need to use generative AI with client data, it must be performed on approved, secure, and isolated platforms, ensuring that the data cannot be accessed or retained by third parties (a simple pre-submission screening step is sketched after this list).
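To make the data-protection point concrete, the following is a minimal, hypothetical Python sketch of a pre-submission screening step that redacts likely personally identifiable information before a prompt is sent to any generative AI service. The regular-expression patterns and function names here are illustrative assumptions only; a production control would rely on a vetted data-loss-prevention tool with far broader coverage.

    import re

    # Illustrative patterns only; a real control would use a vetted
    # data-loss-prevention library with much broader coverage.
    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact_pii(prompt: str):
        """Replace likely PII with placeholders and report what was found."""
        found = []
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(prompt):
                found.append(label)
                prompt = pattern.sub("[REDACTED " + label.upper() + "]", prompt)
        return prompt, found

    clean_prompt, flags = redact_pii(
        "Summarize my call with Jane Doe, SSN 123-45-6789, jane.doe@example.com."
    )
    if flags:
        print("Redacted before submission:", flags)
    print(clean_prompt)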

Fact Check

It can be tempting to take the first AI-generated response you see, for example Google’s “AI Overview.” However, it’s critical to remember that the AI is combining information from a range of sources, learning patterns and structures from the data it was trained on, and its output still needs to be fact-checked and reviewed prior to use and/or distribution. It can provide inaccurate information that should not be taken as fact. Large Language Models (“LLMs”) simulate comprehension but lack genuine understanding, which can result in errors or inappropriate outputs in complex scenarios.

Another example can be found with AI notetaking: many users are noticing that it will focus on the wrong points and generate inaccurate summaries that miss the gist of the conversation. Additionally, if generative AI is used to produce marketing material, you will want to confirm that the material does not contain any false or misleading statements of fact and is in line with the general prohibitions (e.g., the requirement to substantiate statements of fact).

Where AI Is Found

Though it seems hard to wrap your arms around, the CCO must develop an understanding of how AI is being used at the firm and/or by the firm’s third-party service providers. Document everywhere it is used for advisory operations, including portfolio management, trading, marketing, and supervisory compliance. Consider whether your firm has Copilot installed for all users, whether your video calls use AI notetaking, or whether you have AI-assisted email drafting set up. Figure out how employees are using AI models and tools, and document that within the policy, ensuring that any controls around their use are also documented. For example, if you have AI notetaking in place, document how the notes are reviewed for accuracy. Keep in mind that you may also need to obtain consent from clients and vendors before using AI notetaking, and maintain documentation of that consent.
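As one way to structure that documentation, the hypothetical Python sketch below models a single entry in a firm-wide AI-use inventory. The fields, tool names, and controls shown are assumptions for illustration, not a prescribed format.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class AIUseRecord:
        """One entry in a hypothetical firm-wide AI-use inventory."""
        tool: str           # e.g., Copilot, an AI notetaker
        business_use: str   # what the tool is used for
        data_exposed: str   # what data the tool can see
        controls: str       # documented controls around its use
        reviewer: str       # who verifies outputs for accuracy

    inventory = [
        AIUseRecord(
            tool="AI notetaker (hypothetical vendor)",
            business_use="Summaries of client video calls",
            data_exposed="Client names, meeting discussion",
            controls="Client consent obtained; host reviews notes for accuracy",
            reviewer="Meeting host",
        ),
    ]

    # Dump the inventory so it can be attached to the written policy.
    print(json.dumps([asdict(r) for r in inventory], indent=2))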

Training

All employees should receive periodic training covering the use of generative AI tools. The training should highlight the strengths, limitations, and potential pitfalls of these technologies. Because the training and policies will continue to evolve as the technology changes, employees should be made aware of that fact. Employees should also understand the fraud risks associated with AI use, including voice hacking, targeted phishing, social engineering to generate password lists, fake businesses, and more. For example, Schwab uses a voice ID service, and generative AI could be used to mimic a voice with enough accuracy to fool such a system, though Schwab believes its system “would not be so easily fooled, since they analyze discrete speech markers.” Regardless, it’s something employees should keep in mind and consider when using generative AI tools.

Disclosures

Whenever generative AI tools have played a significant role in generating content or data presented to your clients, they should be informed through disclosures. This includes marketing materials. The disclosure should include:

  • The nature and extent of AI involvement in the firm’s investment process and/or advisory operations.

  • The risks associated with those AI tools.

  • Any conflicts of interest.

Record Keeping and Risk Management

Records of AI interactions that influence business decisions, client interactions, or regulatory matters should be maintained for a period consistent with SEC record-keeping requirements. This includes storing the raw outputs from the AI, input data, and any subsequent edits or validations made by employees.
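As a rough illustration of what such a record might capture, the hypothetical Python sketch below writes one AI interaction (input, raw output, edited output, and reviewer) to an append-only log. The field names and file format are assumptions; actual retention periods should be confirmed against the SEC’s books-and-records requirements.

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class AIInteractionRecord:
        """A hypothetical record of one AI interaction kept for books and records."""
        tool: str
        prompt: str          # input data submitted to the AI
        raw_output: str      # unedited AI response
        final_output: str    # text after employee edits and validation
        reviewed_by: str
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = AIInteractionRecord(
        tool="Closed-AI drafting tool (hypothetical)",
        prompt="Draft a plain-English summary of the quarterly report.",
        raw_output="<AI draft exactly as generated>",
        final_output="<draft after fact-checking and edits>",
        reviewed_by="J. Smith, Compliance",
    )

    # Append to a JSON Lines log; retain per applicable record-keeping rules.
    with open("ai_interaction_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")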

The RIA should maintain documentation supporting the identified risks and its review of everywhere AI is in use, both internally and externally through service providers. It can be helpful to do this within your risk assessment or due diligence review.

Conclusion

As generative AI continues to evolve, so should the policies and procedures of an RIA. It remains critical to the security of your firm’s client and proprietary information to document everywhere AI is in use within and outside of your firm and to determine the associated risks. Put procedures in place to remove or mitigate those risks, and continually re-evaluate the controls in place as the technology changes. Train your employees on the policies your firm develops, as human error is often the cause of cybersecurity issues. Along the same lines, consider access rights to generative AI tools and whether everyone at the firm needs to use them for business purposes.

As you are able, conduct regular vulnerability and penetration tests on the AI tools in use at your firm, and review the testing that your service providers conduct. Have conversations with your employees to ensure that you, as the CCO, are aware of how they are using AI so you can best protect the firm. Determine whether they are just creating cat pictures or putting together client reporting with AI so that you are not in the dark. This could include establishing an AI Committee that determines appropriate use of AI at the firm and is responsible for updating the policies and procedures.
