LLMs & Malicious Code Injections: 'We Have to Assume It's Coming'

Large language models promise to enhance secure software development life cycles, but there are unintended risks as well, CISO warns at RSAC.

"LLM" in a dialogue box; "LARGE LANGUAGE MODEL" below it
Source: Bakhtiar Zein via Alamy Stock Vector

A rise in prompt injection engineering into large language models (LLMs) could emerge as a significant risk to organizations, an unintended consequence of AI discussed during a CISO roundtable discussion on Monday. The panel was held during Purple Book Community Connect–RSAC, an event at this week's RSA Conference in San Francisco.

‍One of the three panelists, Karthik Swarnam, CISO at ArmorCode, an application security operations platform provider, believes incidents arising from prompt injections in code are inevitable. "We haven't seen it yet, but we have to assume that it is coming," Swarnam tells Dark Reading. 

Socially Engineered Text Alerts 

LLMs trained with malicious prompting can trigger code that pushes continuous text alerts with socially engineered messages that are typically less adversarial techniques. When a user unwittingly responds to the alert, the LLM could trigger nefarious actions such as unauthorized data sharing.

"Prompt engineering will be an area that companies should start to think about more and invest in," Swarnam says. "They should train people in the very basics of it so that they know how to use it appropriately, which would yield positive results."

Swarnam, who has served as CISO of several large enterprises including Kroger and AT&T, says despite concerns about the risks of using AI, most large organizations have begun embracing it for operations such as customer service and marketing. Even those that either prohibit AI or claim they're not using it are probably unaware of down-low usage, also known as "shadow AI."

"All you have to do is go through your network logs and firewall logs, and you'll find somebody is going to a third-party LLM or public LLM and doing all kinds of searches," Swarnam says. "That reveals a lot of information. Companies and security teams are not naive, so they have realized that instead of saying 'No' [to AI usage] they're saying 'Yes,' but establishing boundaries."

One area in which many companies have embraced AI is incident response and threat analytics. "Security information and event management is definitely getting disrupted with the use of this stuff," Swarnam says. "It actually eliminates triaging at level one, and in a lot of cases at level two as well."

Adding AI to Application Development 

When using AI in application development tools, CISOs and CIOs should establish what type of coding assistance is practical for their organizations based on their capabilities and risk tolerance, Swarnam warns. "And don't ignore the testing aspects," he adds.

It is also important for leaders to consistently track where their organizations are failing and reinforce it with training. "They should focus on things that they need, where they are making mistakes — they are making constant challenges as they do development work or software development," Swarnam says.

About the Author(s)

Jeffrey Schwartz, Contributing Writer

Jeffrey Schwartz is a journalist who has covered information security and all forms of business and enterprise IT, including client computing, data center and cloud infrastructure, and application development for more than 30 years. Jeff is a regular contributor to Channel Futures. Previously, he was editor-in-chief of Redmond magazine and contributed to its sister titles Redmond Channel Partner, Application Development Trends, and Virtualization Review. Earlier, he held editorial roles with CommunicationsWeek, InternetWeek, and VARBusiness. Jeff is based in the New York City suburb of Long Island.

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.

You May Also Like


More Insights