Graybeard
Well-Known Member
Not sure why anyone is that surprised by this. Many humans are basically evil
Sleeper
And the beat goes on ...
"Illicit large language models (LLMs) can make up to $28,000 in two months from sales on underground markets, according to a study published last month in arXiv, a preprint server owned by Cornell University.

That's just the tip of the iceberg, according to the study, which looked at more than 200 examples of malicious LLMs (or malas) listed on underground marketplaces between April and October 2023. The LLMs fall into two categories: those that are outright uncensored LLMs, often based on open-source standards, and those that jailbreak commercial LLMs out of their guardrails using prompts.
...The existence of such malicious AI tools shouldn’t be surprising, according to Wang. “It’s almost inevitable for cybercriminals to utilize AI,” Wang says. “Every technology always comes with two sides.”"