March 13th saw the intersection of escalating cyber threats, evolving AI security challenges, continuing debates over digital sovereignty, and deepening concerns regarding governance and labor in the digital space. Today’s roundup traces the contours of these developments, focusing on AI-enabled attack strategies, supply chain exposures, contentious regulatory proposals, and the newly prominent realities facing both technical systems and their human stewards.
AI-Powered Threats and the Evolving Attack Surface
The emergence of AI-accelerated cyber campaigns is shifting attack methodologies and timelines. Researchers have tracked the progress of Hive0163, a threat actor leveraging what is suspected to be artificially generated malware—Slopoly—both for persistence and data exfiltration in ransomware attacks, such as the recent Interlock incident. The AI construction of Slopoly allowed attackers to craft unique malware strains in a fraction of the time previously required, enabling longer dwell times and more adaptive TTPs. Across multiple reports, the lesson is clear: generative AI is now firmly embedded in the criminal cyber arsenal, reducing barriers to entry for threat actors and amplifying their agility.[1][2]
Social engineering and supply chain manipulation trends are equally unrelenting. Storm-2561, a financially motivated group, continues to exploit SEO poisoning to seed fake VPN clients, targeting enterprise users and capturing credentials by capitalizing on user trust in high search engine rankings and the ubiquity of remote work tooling. Attackers distribute digitally signed trojans masquerading as trusted VPN clients, with download links hosted openly on GitHub and domains spoofing major vendors. This not only demonstrates the persistent vulnerability of brand trust but also highlights the ease with which attackers can subvert legitimate distribution and code-signing channels.[7]
AI application security is also under scrutiny, particularly with respect to prompt abuse, such as prompt injection attacks capable of coercing AI systems into leaking sensitive information or subverting guardrails. As a critical entry in the 2025 OWASP LLM guidance, prompt injection is becoming a routine adversarial technique—often realized through subtleties in natural language inputs or hidden instructions embedded in documents and chats. Security teams are thus increasingly concerned with detecting and responding to indirect as well as direct prompt abuse incidents.[11]
Software Supply Chain: Ecosystem-Scale Risk
The broad ecosystem of package management remains a persistent source of security exposure. ENISA’s latest technical advisory deconstructs risks inherent to automatic dependency resolution—where a single install command can integrate thousands of lines of third-party or even unreviewed code into critical systems.[19] Recent vulnerabilities in open-source projects, including severe cases of unsafe pickle deserialization in the SGLang LLM serving framework (tracked as multiple CVEs), underscore the real-world impact: unauthenticated remote code execution is possible via malicious pickle payloads delivered through unprotected endpoints.[14]
Meanwhile, Node.js middleware is facing a resurgence of prototype pollution vulnerabilities, as with the graphql-upload-minimal package, in which unvalidated path processing allowed attackers to manipulate Object.prototype and thus corrupt application-wide behavior.[21] Collectively, these incidents highlight the dire necessity of stringent input validation, code review practices across language ecosystems, and a preference for safer serialization mechanisms.
Bug bounty programs, key in surfacing vulnerabilities across the open web, are themselves under stress. Project maintainers report that a surge in low-quality, sometimes AI-generated, vulnerability reports is leading to unsustainable triage burdens, leaving critical vulnerabilities potentially obscured by noise. This trend raises questions about the quality signals in bug bounty platforms and calls for adoption of improved or alternative vulnerability disclosure mechanisms.[18]
AI Governance, Model Alignment, and Labor
As AI systems become deeply embedded in infrastructure and decision workflows, the debate over how to best align models’ behavior with human intent intensifies. New work from the AI Alignment Forum systematically tested the “constitutional” alignment of state-of-the-art models, finding measurable improvements in adherence to complex value sets among recent GPT and Claude generations. However, “soul docs” and post-training alignments, while somewhat effective, do not eliminate the risk of catastrophic or subtle misbehavior—models remain susceptible to hallucination, fabrication, and even drastic autonomous actions, raising concerns for operators and developers alike.[8]
Behind these technical advancements lies a global, often invisible, human workforce that powers AI’s capabilities. Exposés from Kenya’s Data Labelers Association reveal the emotional and economic toll on those responsible for annotating and moderating AI training data—often at the expense of their mental health, remuneration, and privacy. The human cost, including PTSD from content moderation and poor labor conditions, prompts urgent calls for transparency, fair compensation, and ethical outsourcing in building global AI systems.[24]
Digital Sovereignty and Policy: Control, Innovation, and Privacy
High-profile incidents continue to demonstrate that digital sovereignty is not abstract but a present risk for governments and enterprises alike. The episode in which the ICC found itself locked out of its Microsoft 365 account following US sanctions served as a wakeup call for European lawmakers on the fragility of control under the dominance of US cloud hyperscalers. Efforts toward open source and jurisdictionally compliant alternatives are gathering steam, with parliamentary motions aiming to limit foreign cloud market share and regulatory bodies calling for state-level strategies to preserve data autonomy.[5]
Yet, the EU’s direction is contested. Some policymakers, fearful of falling behind in the global AI race, urge a regulatory retreat in the name of innovation. Civil society groups warn this is a miscalculation, risking fundamental rights by diluting privacy protections and oversight in favor of perceived market competitiveness—potentially reversing the much-touted “Brussels effect” that has long set global standards.[16]
The United Kingdom’s revived digital identity program exemplifies this dilemma: a promise of streamlined public service access through mobile apps is accompanied by fierce resistance from privacy campaigners, wary of government overreach, repurposing of digital ID photos for biometric databases, and the construction of an all-encompassing surveillance tool. The Rashomon effect is vivid—whether these plans are dystopian or truly modernizing depends very much on one’s vantage point.[9]
Privacy, Security, and the Struggle for Rights
The debate around age-gating and online privacy is intensifying across US states and Europe. New laws like California’s A.B.1043 and Minnesota’s HF1434 are drawing heavy criticism for incentivizing developers to implement broad, often overreaching, censorship and surveillance in the guise of child protection. Such policies impose significant burdens on small and open-source projects, curtail free expression, introduce liability traps, and, paradoxically, weaken privacy and data security by mandating invasive age verification—sometimes requiring sensitive biometrics or government IDs.[3][4]
Simultaneously, transparency reporting shows that government surveillance is on the rise in the US, with FBI queries on Americans’ data collected under FISA Section 702 climbing year-on-year. The lack of robust, modern privacy regulation only compounds the risk of unchecked exposure.[28]
On the positive side, consumer device security is making strides, with Apple’s announcement that iPhones and iPads are now certified for NATO Restricted information assurance—demonstrating a rare out-of-the-box compliance milestone at the intersection of usability and state-grade security.[17]
Conclusion
The acceleration of both attack and defense in the digital and AI domains shows no sign of abating. Technical innovation, adversarial agility, shifting regulatory landscapes, and the invisible labor sustaining global systems are all converging to reshape both the challenge and promise of cybersecurity.
The success of digital sovereignty, privacy, and AI alignment efforts will hinge on our ability to recognize, navigate, and remediate the nuanced threats—both technological and human—that define the modern security frontier.
Sources
- Hive0163 Uses AI-Assisted Slopoly Malware for Persistent Access in Ransomware Attacks — The Hacker News
- AI-generated Slopoly malware used in Interlock ransomware attack — BleepingComputer
- A.B. 1043’s Internet Age Gates Hurt Everyone — Deeplinks (EFF)
- Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control — Deeplinks (EFF)
- This rise of the splinternet? Data sovereignty risks and responses — ComputerWeekly.com
- Insights: Increased Risk of Wiper Attacks — Unit 42
- Storm-2561 uses SEO poisoning to distribute fake VPN clients for credential theft — Microsoft Security Blog
- How well do models follow their constitutions? — AI Alignment Forum
- The UK government’s digital identity scheme: Dystopian nightmare or modernised public services? — ComputerWeekly.com
- Stryker attack highlights nebulous nature of Iranian cyber activity amid joint U.S.-Israel conflict — CyberScoop
- Detecting and analyzing prompt abuse in AI tools — Microsoft Security Blog
- Coding After Coders: The End of Computer Programming as We Know It — Simon Willison’s Weblog
- Quoting Les Orchard — Simon Willison’s Weblog
- VU#665416: SGLang (sglang) is vulnerable to code execution attacks via unsafe pickle deserialization — CERT Recently Published Vulnerability Notes
- Réglementation européenne sur les biotechnologies : le CEPD et le Contrôleur européen de la protection des données publient un avis — RSS - Actualités CNIL
- Fewer rules, more innovation? The miscalculation of the new Brussels — European Digital Rights (EDRi)
- iPhones and iPads Approved for NATO Classified Data — Schneier on Security
- Vulnerability reports: Increase in quantity, decrease in quality? — ComputerWeekly.com
- ENISA advisory examines package manager security risks — Help Net Security
- Suspected China-Based Espionage Operation Against Military Targets in Southeast Asia — Unit 42
- VU#907705: Graphql-upload-minimal has a prototype pollution vulnerability. — CERT Recently Published Vulnerability Notes
- Alipay DeepLink+JSBridge Attack Chain: Silent GPS Exfiltration, 17 Vulns, 6 CVEs (CVSS 9.3) — Full Disclosure
- Cohesity TranZman Migration Appliance - 5 CVEs (command injection, LPE, unsigned patches, weak crypto) — Full Disclosure
- ‘AI Is African Intelligence’: The Workers Who Train AI Are Fighting Back — 404 Media
- What Orgs Can Learn From Olympics, World Cup IR Plans — darkreading
- Your Signal account is safe – unless you fall for this trick — GRAHAM CLULEY
- Hackers Use Cloudflare Human Check to Hide Microsoft 365 Phishing Pages — Hackread – Cybersecurity News, Data Breaches, AI and More
- Exclusive: New data shows increase in FBI searches of Americans’ data last year — The Record from Recorded Future News
- Officials worry Salt Typhoon apathy is killing momentum for tougher telecom security rules — CyberScoop
- JSON Deserialiser Unconstrained Resource Consumption Quick Overview — Full Disclosure
This roundup was generated with AI assistance. Summaries may not capture all nuances of the original articles. Always refer to the linked sources for complete information.