Email Newsletters

Expert’s Corner: How to protect your organization from emerging AI threats

Last year, an employee in a financial role at multinational engineering firm Arup’s Hong Kong office received a suspicious email requesting a financial transaction.

Bill Becker

The employee was then summoned into a video call with the CFO and other employees. Through this meeting, they convinced him to carry out the request received in the email.

The only problem was that what the employee saw and heard on screen wasn’t real. It was a deepfake. The image and voices of the CFO and others that attended the call were cloned using artificial intelligence.

The result of the deepfake attack: $25 million gone in an instant.

ADVERTISEMENT

What can a business do to deal with something like this?

One thing is to set up a process that requires multiple people to approve financial transactions. You should also have a protocol for verifying that the person, even the CEO, is legitimate.

Lastly, invest in detection tools and employee training.

This is just the tip of the iceberg with things to consider when it comes to AI.

ADVERTISEMENT

For example, how is your organization dealing with shadow AI?

With shadow AI, employees are using unapproved/unauthorized AI tools and platforms. While an employee may think the AI solution they’re using helps them get the job done faster, there are several things going on that introduce risk into your organization.

Here are just a few to consider:

  • Protected data being entered into an unapproved large language model: This jeopardizes both company and client data. It’s both a privacy and compliance issue.
  • Vulnerabilities: Because these tools aren’t vetted, they expand your organization’s attack surface and may contain critical security flaws. That makes your systems more exposed and increases the risk of a cyberattack.
  • Lack of visibility: Not being able to account for your AI assets puts your company in an uncomfortable situation. Should an incident occur, lack of visibility increases the time it takes to respond to something that goes wrong.

So, what can you do to reduce risk in this domain? Here are some ideas to get you started:

ADVERTISEMENT
  • Have clear policies that spell out which tools and systems employees are allowed to use, and how they should use them.
  • Have a vetting process for AI tools and platforms. This process should include understanding how data flows, seeing if output introduces harmful bias that impacts decisions and customers, and if there are any security or privacy concerns.
  • To enhance visibility, establish a system that documents approved AI platforms and monitors any unapproved ones in use within your organization.

Lastly, let’s briefly talk about the threat landscape that’s driven by AI. One of the growing trends is indirect prompt injection attacks.

This is where malformed prompts are hidden in places like websites controlled by a bad actor, or they’re added to social media posts. Then, when some unsuspecting person visits the site and asks their web browser AI assistant to summarize the page, the hidden malformed prompts are executed. This can lead to things like credential theft or data exfiltration.

What does this mean for your business?
This goes back to vetting tools, visibility and policies for using and securing browsers with AI assistants.

As your business begins to integrate AI to help with various functions, it’s also important to reduce the risk new technology brings.

The rapid evolution of AI brings both opportunities and challenges. It is our responsibility as business leaders to navigate these complexities thoughtfully, ensuring that innovation does not come at the cost of security.

Bill Becker is the owner of Connecticut-based information security and intelligence firm Bsquared Intel.

Close the CTA

Black Friday Sale! Get 40% off new subscriptions through Sunday, 11/30!