Hackers now use ChatGPT – Microsoft raises alarm


OpenAI and Microsoft jointly announced on Wednesday that government-affiliated hackers from Russia, North Korea, and Iran have been using ChatGPT to test new techniques for carrying out online attacks.

In response, both companies have closed the associated accounts and stepped up their joint efforts to prevent state-sponsored misuse of widely used AI chatbots.

A major backer of OpenAI, Microsoft builds its own applications and software on the large language model (LLM) technology that OpenAI provides.


“The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT,” Microsoft said.

According to the company, hackers are using LLMs to “advance their objectives and attack technique.”

OpenAI’s services, which include its world-leading GPT-4 model, were used for “querying open-source information, translating, finding coding errors, and running basic coding tasks,” the company behind ChatGPT said in a separate blog post.


According to Microsoft, Forest Blizzard, a group linked to Russian military intelligence, turned to LLMs for “research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine.”

The use was “representative of an adversary exploring the use cases of a new technology,” Microsoft added.

Emerald Sleet, which is linked to North Korea, researched think tanks and experts associated with the regime, as well as content that could be used in online phishing campaigns.


Crimson Sandstorm, which is linked to Iran’s Revolutionary Guard, used ChatGPT to program and troubleshoot malware, and to research how hackers can evade detection, according to Microsoft.

OpenAI stated that the threat was “limited,” but said it aimed to stay ahead of malicious actors as their tactics evolve.

“There are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits,” OpenAI said.

