Basically anywhere that LLMs are implemented… they are a security vulnerability, for any situation in which they are not sandboxed.
Anything they can interface with?
You can probably trick it or exploit it into doing something unintended or unexpected to anything else it is connected to.
Theoretically you could use an LLM to do something like come up with more accurate heuristics for identifying malware.
But… they’re nowhere near ‘intelligent’ enough to like, give it a whole code base for some kind of software, and thoroughly make that software 100% secure.
Basically anywhere that LLMs are implemented… they are a security vulnerability, for any situation in which they are not sandboxed.
Anything they can interface with?
You can probably trick it or exploit it into doing something unintended or unexpected to anything else it is connected to.
Theoretically you could use an LLM to do something like come up with more accurate heuristics for identifying malware.
But… they’re nowhere near ‘intelligent’ enough to like, give it a whole code base for some kind of software, and thoroughly make that software 100% secure.