Security news weekly round-up - 25th April 2025

If you think cybersecurity is an afterthought that deserves little of your attention, think again. The myriad of threats out there is overwhelming, to say the least. We do our best to cover the ones most likely to affect you, and we count on you to stay vigilant and cautious while navigating online.

In this week's review, we are not short of the regulars: malware, artificial intelligence, and online scams.

Let's begin.

New Android malware steals your credit cards for NFC relay attacks

I'll be candid: I am not surprised that malware is stealing credit card details. What makes this one stand out is its distribution model. It's Malware-as-a-Service (MaaS) with some social engineering involved; the threat actors still need you to install the malware under the guise of a legitimate app.

What can you take away from this? The following:

SuperCard X is currently not flagged by any antivirus engines on VirusTotal and the absence of risky permission requests and aggressive attack features like screen overlaying ensures it stays off the radar of heuristic scans. The emulation of the card is ATR-based (Answer to Reset), which makes the card appear legitimate to payment terminals.

Cookie-Bite attack PoC uses Chrome extension to steal session tokens

Luckily, it's a Proof of Concept (PoC). If you're unfamiliar with the term, it means the attack is possible and the affected party should look into it. Sometimes, when researchers develop a PoC like this one, the affected company might respond that it does not pose an "active threat to their users". Or they might respond by patching the security gap the PoC exploits.

In this case, the response amounts to some preventive measures listed at the end of the article. Why is this so important? Here is why:

Once a cookie is stolen, the attackers inject it into the browser like any other stolen cookie. This can be done through tools like the legitimate Cookie-Editor Chrome extension, which allows the threat actor to import the stolen cookies into their browser under 'login.microsoftonline.com.'

After refreshing the page, Azure treats the attacker's session as fully authenticated, bypassing MFA and giving the attacker the same level of access as the victim.
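That level of access is why it pays to audit what runs inside your browser. The sketch below is not from the article; it's a minimal, hedged Python example that walks the default Chrome profile directories (assumed paths, adjust for your setup) and flags installed extensions whose manifests request the cookies permission, the same capability a cookie-stealing extension relies on.

```python
import json
from pathlib import Path

# Assumed default Chrome extension directories; adjust for your OS and profile.
CANDIDATE_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
]


def audit_extensions() -> None:
    """Flag installed extensions whose manifest requests the 'cookies' permission."""
    for ext_dir in CANDIDATE_DIRS:
        if not ext_dir.is_dir():
            continue
        # Layout is Extensions/<extension id>/<version>/manifest.json
        for manifest in ext_dir.glob("*/*/manifest.json"):
            try:
                data = json.loads(manifest.read_text(encoding="utf-8"))
            except (OSError, json.JSONDecodeError):
                continue
            perms = data.get("permissions", []) + data.get("optional_permissions", [])
            if "cookies" in perms:
                # Names are often locale placeholders like "__MSG_extName__".
                name = data.get("name", "unknown")
                ext_id = manifest.parent.parent.name
                print(f"[!] '{name}' ({ext_id}) can read browser cookies")


if __name__ == "__main__":
    audit_extensions()
```

An extension that can read cookies is not automatically malicious; Cookie-Editor itself is legitimate. Still, it is worth knowing which extensions hold that kind of power over your session tokens.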

How fraudsters abuse Google Forms to spread scams

Threat actors abusing legitimate technology for malicious purposes is nothing new. Therefore, articles like this should serve as a reminder for you and me.

From the article:

If the worst happens and you think you’ve fallen victim to a Google Forms attack, change your passwords, run a malware scan, and tell your bank to freeze any cards (if you’ve submitted card details). Switch on MFA for all accounts if you’ve not already, and monitor your accounts for any unusual activity.

Files Deleted From GitHub Repos Leak Valuable Secrets

It's never truly gone. If someone decides to search for it, they might find it. That's a two-sentence summary of what's going on in this article. The good part? The researcher got paid for it.

What made this possible, according to the researcher, is a lack of understanding of how Git works, among other things.

From the article:

To shed light on these risks, Brizinov built an automated tool to clone public repositories, traverse all commits to find deleted files, restore them, and scan them for secrets such as API keys, tokens, and credentials.

In addition to platform-specific developer tokens and sessions, and email SMTP credentials, Brizinov discovered tokens for GCP projects, AWS, Slack, GitHub, OpenAPI, HuggingFace, and Algolia.
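To make the approach concrete, here is a small, hedged Python sketch in the same spirit as (but not identical to) Brizinov's tool. Run from inside a local clone, it lists every file deletion in the history, restores each deleted file's final content from the parent commit, and checks it against a few illustrative secret patterns. The regexes are assumptions for demonstration; real scanners ship far larger rule sets.

```python
import re
import subprocess

# Illustrative secret patterns only; real scanners use far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Slack token": re.compile(r"xox[baprs]-[A-Za-z0-9-]+"),
    "Private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def git(*args: str, repo: str = ".") -> str:
    """Run a git command in *repo* and return its stdout as text."""
    result = subprocess.run(["git", "-C", repo, *args],
                            capture_output=True, check=True)
    return result.stdout.decode("utf-8", errors="replace")


def deleted_files(repo: str = ".") -> list[tuple[str, str]]:
    """Return (commit, path) pairs for every file deletion in the history."""
    out = git("log", "--diff-filter=D", "--summary", "--format=%H", repo=repo)
    pairs, commit = [], None
    for line in out.splitlines():
        line = line.strip()
        if re.fullmatch(r"[0-9a-f]{40}", line):
            commit = line
        elif line.startswith("delete mode") and commit:
            # Summary line looks like: "delete mode 100644 path/to/file"
            pairs.append((commit, line.split(maxsplit=3)[3]))
    return pairs


def scan(repo: str = ".") -> None:
    for commit, path in deleted_files(repo):
        try:
            # The parent commit still contains the deleted file's final content.
            content = git("show", f"{commit}^:{path}", repo=repo)
        except subprocess.CalledProcessError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(content):
                print(f"[!] {label} found in {path} (deleted in {commit[:10]})")


if __name__ == "__main__":
    scan(".")
```

Point it at a clone of your own public repositories. Anything it flags was removed from the working tree at some point but, as the article stresses, is still one git show away for anyone who clones the repo and walks its history.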

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

Artificial intelligence models like ChatGPT, Claude, and Gemini have guardrails that prevent them from producing harmful content. But as history has shown, researchers keep developing attack techniques to bypass those guardrails. This is one such example.

From the article:

The universal bypass for all LLMs shows that AI models cannot truly monitor themselves for dangerous content and that they require additional security tools. Multiple such bypasses lower the bar for creating attacks and mean that anyone can easily learn how to take control of a model.

Credits

Cover photo by Debby Hudson on Unsplash.

That's it for this week, and I'll see you next time.