Threat actors will take advantage of ChatGPT, says expert

Microsoft, software developers, law enforcement agencies, banks, students writing essays, and almost everyone else think they can take advantage of ChatGPT.

So do threat actors.

The artificial-intelligence-powered chatbot has been touted as the search engine that will unseat Google, a tool to help developers generate impeccable code, even the writer of the next great rock hit. It is so new that people can't yet imagine everything it can do.

But history shows that criminals and nation-states will try to leverage any new technology to their advantage, and infosec professionals should expect no different this time.

A threat researcher at Israel-based Cyberint says they had better be prepared.

Samuel Gihon said that if ChatGPT helps software companies write better code, it will do the same for malware creators.

What’s more, he said, it could help them reverse-engineer security applications.

“As a threat actor, if I can improve my hacking tools, my ransomware, my malware every three to four months, my development time can be cut in half or more. So for defence vendors playing a cat-and-mouse game with threat actors, it can become difficult.”

The “if” in that sentence isn’t due to the capability of the tool, he said, but the capabilities of the threat actor using it. “AI in the right hands can be a very strong tool. Professional threat actors, ransomware groups and espionage groups will probably make better use of this tool than amateur actors.

“I’m pretty sure they’ll find great uses for this technology. It’ll probably help them reverse-engineer the software they’re attacking … they’ll be able to find new vulnerabilities and bugs in their own code in less time.”

And infosec professionals shouldn’t just worry about ChatGPT, he said, but about any tool powered by artificial intelligence. “Tomorrow another AI engine will be released,” he said.

“I’m not sure security vendors are ready for this rate of innovation on the part of threat actors,” he said. “It is something we should prepare ourselves for. I know AI is already embedded in security technology, but I am not sure it is at this stage.”

He advised that security vendors think about how threat actors could use ChatGPT against their applications. “If some of my products are open source, or my front-facing infrastructure is built on Nginx, I need to know what ChatGPT says about my technology. I need to know how to see ChatGPT’s capabilities through the eyes of threat actors.”

At the same time, CISOs should look at whether such tools can be leveraged to help protect their own environments. One possibility: software quality assurance.




