Microsoft warns that US adversaries are starting to use artificial intelligence in offensive cyber campaigns

FILE PHOTO: A man holds a laptop while cyber code is projected in this illustration photo taken May 13, 2017. Reuters/Kacper Pempel/Illustration/File Photo

Microsoft said on Wednesday that America’s adversaries (chiefly Iran and North Korea, and to a lesser extent Russia and China) are beginning to use generative artificial intelligence to mount or organize offensive cyber operations.

The tech giant and its business partner OpenAI said they had jointly detected and disrupted malicious cyber actors’ use of their AI technologies, shutting down the actors’ accounts.

In a blog post, Microsoft said the techniques it observed were in their “early stages” and were neither “particularly novel or unique,” but that it was important to expose them publicly as US rivals leverage large language models to expand their ability to breach networks and conduct influence operations.

Cybersecurity firms have long used machine learning for defense, chiefly to detect anomalous behavior on networks. But criminals and offensive hackers use it too, and the introduction of large language models, led by OpenAI’s ChatGPT, has escalated the game of cat and mouse.

Microsoft has invested billions of dollars in OpenAI, and Wednesday’s announcement coincided with its release of a report finding that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That is a threat to democracy in a year when more than 50 countries will hold elections, magnifying disinformation that is already occurring.

FILE PHOTO: The ChatGPT logo is seen in this illustration taken February 3, 2023. Reuters/Dado Ruvic/Illustration/File Photo

Below are some examples Microsoft provided. In each case, the company said all generative AI accounts and assets of the named groups were disabled:

– The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country and to generate content likely used in spear-phishing campaigns.

– Iran’s Revolutionary Guard has used large language models to assist in social engineering, to troubleshoot software errors, and even to study how intruders might evade detection in a compromised network. That includes generating phishing emails, “one of which purported to come from an international development agency and another of which attempted to lure prominent feminists to an attacker-built website on feminism.” AI helps accelerate and scale up the production of such emails.

– The Russian GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies that may be relevant to the war in Ukraine.

– The Chinese cyberespionage group known as Aquatic Panda, which targets a broad range of industries, higher education and governments from France to Malaysia, has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”

– The Chinese group Maverick Panda, which has targeted US defense contractors among other sectors for more than a decade, had interactions with the large language model suggesting it was evaluating the model’s effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

In a separate blog post published Wednesday, OpenAI said its current GPT-4 model chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

Cybersecurity researchers expect that to change.

Last April, Jen Easterly, the director of the US Cybersecurity and Infrastructure Security Agency, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.”

Easterly said at the time that the United States needs to ensure that AI is designed with security in mind.

Critics of the public release of ChatGPT in November 2022 (and of subsequent releases by competitors including Google and Meta) contend that it was irresponsibly hasty, given that security was largely an afterthought in its development.

“Of course bad actors are using large language models; that decision was made when Pandora’s box was opened,” said Amit Yoran, CEO of the cybersecurity firm Tenable.

Some cybersecurity professionals complain that Microsoft is building and selling tools to patch vulnerabilities in large language models when it might more responsibly focus on making the models more secure in the first place.

“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” asked Gary McGraw, a cybersecurity veteran and co-founder of the Berryville Institute of Machine Learning.

Edward Amoroso, a New York University professor and former AT&T chief security officer, said that while the use of AI and large language models may not pose an immediately obvious threat, over time they will become “one of the most powerful weapons in every nation-state military’s offense.”

(With information from AP)
