‘Vibe-Hacking’ is now a top threat


“Agentic AI systems are being weaponized.”

That is one finding from Anthropic’s latest Threat Intelligence report, released today, which details a wide range of cases in which Claude – and likely many other leading AI agents and chatbots – have been misused.

First up: “vibe-hacking.” One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 different organizations around the world within a single month. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government entities.

“What would otherwise have required a team of sophisticated actors, as in the vibe-hacking case, can now be carried out by a single actor with the help of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, said in an interview. He added that in this case, Claude was “executing the operation end-to-end.”

Anthropic wrote in the report that in cases like this, AI “serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to perform manually.” For example, Claude was specifically used to write “psychologically targeted extortion demands.” The cybercriminals then worked out how much the data – which included healthcare records, financial information, government credentials, and more – would be worth on the dark web, and issued ransom demands exceeding $500,000, per Anthropic.

“It’s the most sophisticated use of agents I’ve seen … for cyber offense,” Klein said.

In another case study, Claude helped North Korean IT workers fraudulently obtain jobs at Fortune 500 companies in the United States in order to fund the country’s weapons program. Typically, in such cases, North Korea tries to use people who can communicate in English, per Klein – but he said that in this case, AI lowered the barrier for people in North Korea to get through technical interviews.

With Claude’s help, Klein said, “we’re seeing people who don’t really know how to communicate or do the work have most of the job actually done by Claude.”

Another case study involved a romance scam. A Telegram bot with more than 10,000 monthly users advertised Claude as a “high EQ model” for help generating emotionally intelligent messages, ostensibly for scams. It enabled non-native English speakers to write persuasive, flattering messages to gain the trust of victims in the US, Japan, and Korea, and then ask them for money. One example in the report showed a user uploading a picture of a man in a tie and asking how best to compliment him.

In the report, Anthropic acknowledges that although the company has “developed sophisticated safety and security measures” that are “generally effective,” bad actors still sometimes manage to evade them. Anthropic says that AI has lowered the barriers to sophisticated cybercrime, with the technology being used to profile victims, automate scam operations, analyze stolen data, including stolen credit card information, and more.

Each of the case studies in the report adds to the growing evidence that AI companies, try as they might, often cannot keep up with the societal risks associated with the technology they are creating and putting out into the world. “While specific to Claude, the case studies presented below likely reflect consistent patterns of behavior across all frontier AI models,” the report states.

Anthropic said that for each case study, it banned the associated accounts, created new classifiers or other detection measures, and shared information with the appropriate authorities, such as intelligence agencies, Klein confirmed. He also said the case studies his team saw reflect part of a broader shift in AI risk.

“There’s this shift that’s happening where AI systems are not just a chatbot anymore, because now they can take multiple steps,” Klein said, adding that they are “able to actually conduct actions or activities.”

