Communication Compliance with Generative AI Technology


Major regulators all over the world have begun sounding the alarm over the rise of generative AI (artificial intelligence) and the technology’s ramifications for compliance.

Industry watchdogs, especially in banking and investment management, are keen to stay on top of advancements in the field, given the billions of dollars at stake. As a result, regulators across industries have been churning out risk assessment reports, new regulations, and employee guidelines to bolster compliance.

The sweeping overhaul of existing compliance infrastructure is also quite justified, given that not a day goes by without a generative AI-related scam or controversy. In one of the most chilling cases, fraudsters used deepfake technology to impersonate the Chief Financial Officer of a Hong Kong-based multinational and trick an employee into paying out over $25 million.

However, the same institutions that categorized the technology as an emerging risk have also gone on to tout its value for enforcing compliance norms, giving compliance officers at regulated firms the clearest possible signal that generative AI is a double-edged sword requiring deep consideration.

In the last few years, regulators have demonstrated a remarkable willingness to crack down on new-age compliance evasion, handing out massive penalties, such as the infamous WhatsApp fines, to bankers who used personal devices and unofficial communication platforms to discuss work.

Similarly, regulators have swung into action, putting in place new regulations and teams to investigate compliance challenges arising from generative AI. Notably, the Financial Industry Regulatory Authority (FINRA) categorized AI as an emerging risk in its annual regulatory report, noting that deploying the technology could potentially implicate every aspect of a broker-dealer’s regulatory obligations and highlighting the need for enhanced focus in areas such as Books and Records, Communications with the Public, Supervision, and Customer Information Protection.

Meanwhile, the U.S. Securities and Exchange Commission (SEC) has already set up a specialized team to handle emerging risks in the area. It has also put forth proposals that would require comprehensive recordkeeping of all written records relating to “all covered technologies used in investor interactions.” It is worth noting that covered technology includes a wide variety of AI tools used to make investment recommendations or to interact with investors on behalf of a registered investment advisor (RIA), including chatbots.

Aside from industry regulators, world governments, including the US administration and the EU, have proposed AI regulations to protect sensitive information. This is a welcome move, given findings that employees input sensitive corporate data into large language models hundreds of times a week, data those models may then incorporate into their publicly available knowledge base.

Communication compliance risks fueling regulator concerns

Per FINRA, the proliferation of AI tools has led to applications of the technology that are “fraudulent, nefarious, discriminatory, or unfair”.

Given that generative AI is still in its nascent stages, it poses risks to regulated entities in finance and other domains, including healthcare, IT, and even government agencies subject to communication compliance requirements.

These risks typically involve:

Lack of transparency

Firms may not have the capability to keep records of employee interactions with generative AI tools, such as chatbots, into which employees may feed sensitive company information that could become part of the model’s training data. Finance firms could be looking at potential recordkeeping violations under FINRA Rule 4510 and Securities Exchange Act (SEA) Rules 17a-3 and 17a-4, all of which set out books and records requirements.
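
One practical mitigation is to route every employee prompt through a thin wrapper that archives the exchange before and after the model call. The Python sketch below is a minimal illustration under assumed names: `send_to_llm` stands in for your provider’s client, and the JSON-lines file stands in for a proper append-only archive.

```python
import json
from datetime import datetime, timezone

ARCHIVE_PATH = "llm_interactions.jsonl"  # stand-in for an append-only archive

def send_to_llm(prompt: str) -> str:
    """Placeholder for the actual model call; swap in your provider's client."""
    return "model response goes here"

def archived_llm_call(user_id: str, prompt: str) -> str:
    """Archive the prompt, call the model, then archive the response.

    Writing the prompt record before the model call ensures a books-and-records
    trail exists even if the call fails midway.
    """
    def write(event: str, text: str) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "event": event,
            "text": text,
        }
        with open(ARCHIVE_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    write("prompt", prompt)
    response = send_to_llm(prompt)
    write("response", response)
    return response
```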

Accountability challenges

Apart from the issue of employees entering sensitive customer or company information into LLMs, there is also the problem of not being able to hold anyone accountable for problematic, biased, or erroneous AI-generated communication, including auto-replies on instant messengers and email accounts. Notably, even companies with accountability systems in place, such as text message archiving and WhatsApp monitoring, might struggle to assign responsibility for these mishaps.
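
One way to narrow that accountability gap is to attach provenance metadata to every AI-generated outbound message, so any problematic reply can be traced back to a model, a prompt, and an owning team. The sketch below is purely illustrative; the dataclass and its field names are assumptions, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class AIMessageProvenance:
    """Audit metadata stored alongside every AI-generated outbound message."""
    model_name: str          # which model produced the text
    prompt_id: str           # reference to the archived prompt record
    owning_team: str         # team accountable for this channel's output
    reviewed_by: str | None  # human reviewer, if any, before sending
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def send_auto_reply(text: str, provenance: AIMessageProvenance) -> None:
    # In a real system, the provenance record would be written to the same
    # archive as the message itself so audits can join the two.
    print(f"[{provenance.message_id}] {text} (model={provenance.model_name})")

reply = AIMessageProvenance(
    model_name="example-llm-v1",
    prompt_id="prompt-0001",
    owning_team="customer-support",
    reviewed_by=None,
)
send_auto_reply("Thanks for reaching out; we will respond shortly.", reply)
```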

Customer privacy concerns

While recordkeeping of client communication is required by law, generative AI models may hand broker-dealers unethical levels of analytical capability. Companies may end up collecting private customer information at scale, without consent, from social media accounts, text messages, video calls, and other sources for targeted advertising, which is increasingly frowned upon, or end up inferring information they are not supposed to know through the power of predictive analytics. Either outcome can put a company in violation of a host of regulations, including the GDPR.

Copyright violations

As most regulated entities are required to retain copies of their marketing material for investigations, audits, and the like, employing generative AI to create that material could leave the company in violation of copyright law, since models can reproduce protected content from their training data.

Risk of deceptive trade practices

Generative AI chatbots could be a firm’s undoing if they knowingly give consumers the impression that they are conversing with a real human, be it during customer support interactions or sales calls. Given the legislation protecting against deceptive trade practices, such a company is at risk of getting sued.
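
A simple safeguard is to build the disclosure into the session itself, so that no bot conversation can start without the consumer being told they are talking to a machine. The sketch below assumes a hypothetical `ChatSession` wrapper; the exact disclosure wording would come from your legal team.

```python
BOT_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "Type 'agent' at any time to reach a human."
)

class ChatSession:
    """Hypothetical chat session that refuses to run without a disclosure."""

    def __init__(self) -> None:
        self.transcript: list[str] = []
        self._disclosed = False

    def open(self) -> None:
        # The disclosure is the first line of every transcript, which also
        # gives compliance a recordkeeping artifact proving it was shown.
        self.transcript.append(f"BOT: {BOT_DISCLOSURE}")
        self._disclosed = True

    def send_bot_message(self, text: str) -> None:
        if not self._disclosed:
            raise RuntimeError("Disclosure must be shown before any bot message.")
        self.transcript.append(f"BOT: {text}")

session = ChatSession()
session.open()
session.send_bot_message("How can I help you today?")
print("\n".join(session.transcript))
```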

AI-driven surveillance and monitoring applied to communication compliance

Regulators have realized that there is no turning the clock back on generative AI, prompting them to put together specialized teams and regulations to combat the associated risks. Meanwhile, regulated entities also stand to gain a lot by responsibly incorporating AI tools to enforce compliance across all their communication activities.

AI systems are capable of combing through massive datasets of employee communication to weed out violations of your communication policy. Red flags can be identified in real time for the compliance team to act on, keeping the workplace safe from toxic behavior while also safeguarding against regulatory violations. Having an automated system go through your communication with a fine-tooth comb could save you a lot of trouble with regulators, who have started to drill down into even the tiniest aspects of communication, expecting emoji compliance and recordkeeping of GIFs, photos, files, and links.
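
As a rough illustration of what that first screening pass can look like, the sketch below scans each message for policy phrases and flags links and attachments for archiving. Production systems rely on trained classifiers rather than keyword lists; every pattern here is a made-up example.

```python
import re

POLICY_PATTERNS = {
    "guarantee_language": re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),
    "off_record_request": re.compile(r"\bdelete this (message|chat)\b", re.IGNORECASE),
}
LINK_PATTERN = re.compile(r"https?://\S+")

def screen_message(sender: str, text: str, attachments: list[str]) -> list[str]:
    """Return a list of red flags for the compliance team to review."""
    flags = []
    for name, pattern in POLICY_PATTERNS.items():
        if pattern.search(text):
            flags.append(f"{name}: {sender}")
    if LINK_PATTERN.search(text):
        flags.append(f"link_requires_archiving: {sender}")
    for item in attachments:
        # GIFs, photos, and files must be retained just like text.
        flags.append(f"attachment_requires_archiving: {item}")
    return flags

print(screen_message("trader1",
                     "These are guaranteed returns, delete this chat",
                     ["chart.gif"]))
```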

Audit and e-discovery trump card for legal teams

With predictive analytics, technology-assisted review, natural language processing, and similar capabilities, legal teams can assess documents far more efficiently. They can also filter and classify communication based on keywords and context, such as sender and receiver information, to gain the upper hand in investigations and during e-discovery.
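
In practice, the first pass of such a review often amounts to a metadata-plus-keyword filter like the one sketched below; the message schema and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    receiver: str
    sent_at: str  # ISO 8601 timestamp
    body: str

def ediscovery_filter(messages, keywords, custodians):
    """Keep messages that involve a custodian AND mention a keyword."""
    keywords = [k.lower() for k in keywords]
    for m in messages:
        involves_custodian = m.sender in custodians or m.receiver in custodians
        mentions_keyword = any(k in m.body.lower() for k in keywords)
        if involves_custodian and mentions_keyword:
            yield m

corpus = [
    Message("alice", "bob", "2024-03-01T10:00:00Z", "The merger closes Friday."),
    Message("carol", "dave", "2024-03-01T11:00:00Z", "Lunch at noon?"),
]
for hit in ediscovery_filter(corpus, ["merger"], {"alice"}):
    print(hit.sender, "->", hit.receiver, ":", hit.body)
```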

Training and continuous refinement of compliance practices

Regulated entities can use analytics from their AI-enabled call monitoring, WhatsApp call recording, and similar systems to gauge how closely employees adhere to the company’s communication policy, how productive they are, and where potential risk areas lie. Because AI can perform sentiment analysis on conversations, companies can also understand how well their customer support and sales pitches are being received, while gaining unparalleled insight into employee morale. The latter is especially crucial, since employee buy-in is a must for organization-wide compliance.
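
A toy version of the sentiment step might look like the sketch below, which scores transcripts against small positive and negative word lists and averages the result per employee. A real system would use a trained sentiment model; the word lists here are placeholders.

```python
POSITIVE = {"thanks", "great", "happy", "resolved"}
NEGATIVE = {"angry", "frustrated", "complaint", "cancel"}

def sentiment_score(text: str) -> int:
    """Crude lexicon score: positive words add 1, negative words subtract 1."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def team_morale_report(transcripts: dict[str, list[str]]) -> dict[str, float]:
    """Average sentiment per employee across their recorded conversations."""
    return {
        employee: sum(sentiment_score(t) for t in texts) / len(texts)
        for employee, texts in transcripts.items()
    }

print(team_morale_report({
    "rep1": ["customer was happy and thanks us", "issue resolved"],
    "rep2": ["caller was angry about a complaint"],
}))
```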

Stopping off-channel communication in its tracks

With trends like remote and hybrid work and BYOD (Bring Your Own Device), there is the possibility that employees may, knowingly or otherwise, invite customers to continue their conversations on a personal device or an unapproved instant messenger. AI can detect such instances instantaneously and red-flag the interaction.
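
Detection can start with simple pattern matching over archived messages, as sketched below; the phrases are illustrative, and a production system would pair them with a classifier to cut down on false positives.

```python
import re

OFF_CHANNEL_PATTERNS = [
    re.compile(r"\btext me (on|at) my (personal|cell)\b", re.IGNORECASE),
    re.compile(
        r"\b(move|switch|continue) (this |the )?(chat|conversation) "
        r"(to|on) (whatsapp|signal|telegram)\b",
        re.IGNORECASE,
    ),
    re.compile(r"\bmy personal (number|email) is\b", re.IGNORECASE),
]

def detect_off_channel(text: str) -> bool:
    """Flag messages that invite a counterparty onto an unapproved channel."""
    return any(p.search(text) for p in OFF_CHANNEL_PATTERNS)

print(detect_off_channel("Let's continue this chat on WhatsApp"))  # True
print(detect_off_channel("See the attached order confirmation"))   # False
```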

Financial crime monitoring

With an AI system in place to monitor communication, companies and government agencies with a lot to lose if their data is compromised can rest assured that spam, harmful links, files, and images are identified and flagged in real time. For enhanced security, AI capabilities can be leveraged to verify everyone’s identity, via means like facial recognition, before establishing video calls or sharing sensitive information such as trade recommendations and order confirmations.
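
The link-screening portion of such a system can be as simple as checking every URL in a message against block and allow lists before delivery, as sketched below; the domain lists stand in for a real threat-intelligence feed.

```python
import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"malware.example", "phish.example"}  # placeholder feed
APPROVED_DOMAINS = {"example-bank.com"}                 # placeholder allow list
URL_PATTERN = re.compile(r"https?://\S+")

def screen_links(message: str) -> list[tuple[str, str]]:
    """Classify every URL in a message as blocked, approved, or unknown."""
    verdicts = []
    for url in URL_PATTERN.findall(message):
        domain = urlparse(url).netloc.lower()
        if domain in BLOCKED_DOMAINS:
            verdicts.append((url, "blocked"))
        elif domain in APPROVED_DOMAINS:
            verdicts.append((url, "approved"))
        else:
            verdicts.append((url, "unknown: quarantine for review"))
    return verdicts

print(screen_links("Order confirmed: https://example-bank.com/trade "
                   "also see http://phish.example/login"))
```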

Regulators worldwide are grappling with the implications of generative AI, recognizing its potential for fraud, privacy violations, and deceptive practices. However, they also acknowledge its transformative potential in enhancing the surveillance, monitoring, and enforcement of compliance standards. For regulated entities, incorporating AI-driven solutions is a no-brainer given their capacity to keep employee communication aligned with industry regulation and company policy. Moreover, AI facilitates continuous refinement of compliance practices, provides valuable insights into employee behavior and customer interactions, and helps mitigate off-channel communication risks.

While the risks associated with generative AI are real and multifaceted, proactive adoption of AI-powered compliance solutions can empower organizations to navigate the regulatory minefield effectively while leveraging the full potential of this technology.

To stay updated on the latest compliance trends, follow the TeleMessage mobile compliance blog.

You can also contact us for a demo of our SOC 2-certified mobile archiver that can supercharge your compliance efforts.
