
An upgraded variant of Purple Fox malware with worm capabilities is being deployed in a rapidly expanding attack campaign. Purple Fox, first discovered in 2018, previously relied on exploit kits and phishing emails to spread. However, a new campaign taking place over the past several weeks -- and which is ongoing -- has revealed a new propagation method leading to high infection numbers. In a blog post on Tuesday, Guardicore Labs said that Purple Fox is now being spread through "indiscriminate port scanning and exploitation of exposed SMB services with weak passwords and hashes." Based on Guardicore Global Sensors Network (GGSN) telemetry, Purple Fox activity began to climb in May 2020. While there was a lull between November 2020 and January 2021, the researchers say overall infection numbers have risen by roughly 600%, with total attacks currently standing at 90,000.

The malware targets Microsoft Windows machines and repurposes compromised systems to host malicious payloads. Guardicore Labs says a "hodge-podge of vulnerable and exploited servers" is hosting the initial malware payload, many of them running older versions of Windows Server with Internet Information Services (IIS) version 7.5 and Microsoft FTP. Infection chains may begin through vulnerable internet-facing services such as SMB, browser exploits sent via phishing, brute-force attacks, or exploit kits including RIG. As of now, close to 2,000 servers have been hijacked by Purple Fox botnet operators. Guardicore Labs researchers say that once code execution has been achieved on a target machine, persistence is managed through the creation of a new service that loops commands and pulls Purple Fox payloads from malicious URLs. The malware's MSI installer disguises itself as a Windows Update package with different hashes, a feature the team calls a "cheap and simple" way to keep the malware's installers from being connected to one another during investigations. 
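The hash-varying trick is easy to picture: because any change to a file's bytes changes its cryptographic hash, appending a few random trailing bytes to each copy of the same installer yields a unique hash per sample. The sketch below is purely illustrative (the function name and padding size are our own, not Purple Fox's actual code):

```python
import hashlib
import os

def vary_hash(payload: bytes) -> tuple[bytes, str]:
    """Append random trailing bytes so each copy of the same installer
    carries a different file hash. The payload content itself is unchanged."""
    padded = payload + os.urandom(16)
    return padded, hashlib.sha256(padded).hexdigest()
```

Many file formats tolerate trailing padding, which is why the trick is "cheap and simple": each sample behaves identically, yet hash-based correlation between samples fails.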
In total, three payloads are then extracted and decrypted. One tampers with Windows firewall capabilities, creating filters that block a number of ports -- potentially in a bid to stop the vulnerable server from being reinfected with other malware. An IPv6 interface is also installed for port scanning purposes and to "maximize the efficiency of the spread over (usually unmonitored) IPv6 subnets," the team notes, before a rootkit is loaded and the target machine is restarted. Purple Fox is loaded into a system DLL for execution on boot. Purple Fox will then generate IP ranges and begin scanning port 445 to spread. "As the machine responds to the SMB probe that's being sent on port 445, it will try to authenticate to SMB by brute-forcing usernames and passwords or by trying to establish a null session," the researchers say. The Trojan/rootkit installer has adopted steganography to hide local privilege escalation (LPE) binaries in past attacks. To learn more visit OUR FORUM.
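The propagation step described above -- generating IP ranges and probing port 445 -- amounts to a simple TCP connect scan. Here is a minimal, purely illustrative sketch of that idea (only ever run something like this against hosts you own):

```python
import ipaddress
import socket

def probe_smb(host: str, port: int = 445, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return sock.connect_ex((host, port)) == 0

def scan_subnet(cidr: str, port: int = 445):
    """Yield each host in the subnet that responds on the port."""
    for ip in ipaddress.ip_network(cidr).hosts():
        if probe_smb(str(ip), port):
            yield str(ip)
```

A worm would follow a successful probe with authentication attempts (brute-forced credentials or a null session, as the researchers describe); the scan itself is just this loop repeated at scale.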

Looking to use your phone in an emergency? Modern smartphones and smartwatches let you set certain features that will ping your last known location to emergency contacts in a situation where you’re unable to talk on the phone. Both Apple and Google have baked these features into their respective iOS and Android platforms, and we’re seeing more and more wearable manufacturers include them too. Why would you want to set up emergency SOS location tracking? There are a variety of scenarios where you may not be able to talk on a phone but can still find a way to send your location to trusted individuals. These features also give you an easy way to call the emergency services directly, so they’re worth setting up in case you need them in the future. This guide will teach you how to set up the equivalent features on your iPhone, Android phone, or an alternative such as wearables from Garmin and Apple. Not all fitness trackers or wearables sport these features, but most smartphones do. Emergency SOS is already available when you take an iPhone out of the box, but there are some ways you can set it up to work better. It works in all countries, but in some places, you may only be able to choose one particular emergency service. First off, making an emergency services call is simple from an iPhone, but the way it works differs depending on the model you have. If you own an iPhone 8 or later (that’s if your phone came out after 2017), you can hold down the side button and one of the volume buttons. You’ll then see a slider on the screen that says “Emergency SOS”. If you drag this across, it’ll make an immediate call to the emergency services. If you can’t slide it across, continue to hold down the buttons and your phone will make an alert noise with a countdown. That countdown finishes with the phone calling the emergency services, which is particularly useful if you can’t take your phone out of a pocket. 
We would encourage you to set up emergency contacts (more on that below), as your phone will then message those contacts immediately afterward with your location information and more. Why would you want an emergency contact? First off, it can help emergency services identify who to contact, and on Apple devices, these people will immediately receive a message with your location after your call with the emergency services. To set this up, open the Health app and tap your profile picture. In here, you’ll find an option called Medical ID, and at the bottom of that page, an option called Emergency Contacts. This is where you can enter the contact’s information, your relationship to them, and their phone number. Tap Done afterward, and you’ve set up your emergency contact. You can have several of these on your iPhone at one time. On Android phones, these features differ depending on the manufacturer. You can often find the information you need by searching your phone’s Settings for phrases such as “SOS messages” or simply the word “emergency”. For example, Samsung phones have a feature called Send SOS Messages that lets you press the side key three times to automatically message someone with your location. It will automatically attach pictures taken with your rear and front cameras, as well as an audio recording of the moments before the message was sent. For more detailed instructions on various devices visit OUR FORUM.

Today, researchers exposed common weaknesses lurking in the latest smart sex toys that can be exploited by attackers. As more and more adult toy brands enter the market -- and with the COVID-19 situation having led to a rapid increase in sex toy sales -- researchers believe a discussion around the security of these devices is vital. In examples provided by the researchers, technologies like Bluetooth and inadequately secured remote APIs make these IoT personal devices vulnerable to attacks that go beyond merely compromising user privacy. ESET security researchers Denise Giusto Bilić and Cecilia Pastorino have shed light on some weaknesses lurking in smart sex toys, including the newer models. The main concern highlighted by the researchers is that newer wearables like smart sex toys are equipped with many features such as online conferencing, messaging, internet access, and Bluetooth connectivity. This increased connectivity also opens the door to these devices being taken over and abused by attackers. The researchers explain that most of these smart devices feature two channels of connectivity. First, the connectivity between the smartphone user and the device itself is established over Bluetooth Low Energy (BLE), with the user running the smart toy's app. Second, the communication between a remotely located sexual partner and the app controlling the device is established over the internet. To bridge the gap between one's distant lover and the sex toy user, smart sex toys, like any other IoT device, use servers with API endpoints handling the requests. "In some cases, this cloud service also acts as an intermediary between partners using features like chat, videoconferencing and file transfers, or even giving remote control of their devices to a partner," explained Bilić and Pastorino in a report. 
But the researchers state that the information processed by sex toys consists of highly sensitive data such as names, sexual orientation, gender, a list of sexual partners, and private photos and videos, which, if leaked, can seriously compromise a user's privacy. This is especially true if sextortion scammers get creative after getting their hands on such private information. More importantly, though, the researchers express concern over these IoT devices being compromised and weaponized by attackers for malicious actions or to physically harm the user -- as could happen, for example, if the sex toy overheats. "And finally, what are the consequences of someone being able to take control of a sexual device without consent, while it is being used, and send different commands to the device?" "Is an attack on a sexual device sexual abuse and could it even lead to a sexual assault charge?" Bilić and Pastorino further stress. To demonstrate the seriousness of these weaknesses, the researchers conducted proof-of-concept exploits on the Max by Lovense and We-Vibe Jive smart sex toys. Both devices were found to use the least secure "Just Works" method of Bluetooth pairing. Using the BtleJuice framework and two BLE dongles, the researchers demonstrated how a man-in-the-middle (MitM) attacker could take control of the devices and capture the packets. The attacker can then re-broadcast these packets after tampering with them to change settings like vibration mode and intensity, or even inject commands of their own. Likewise, the API endpoints used to connect a remote lover (sexual partner) to the user make use of a token that was not particularly hard to brute-force. Want more? Visit OUR FORUM.
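To see why a short, low-entropy token is dangerous, consider how little work a brute-force attack takes. The sketch below assumes, hypothetically, a 4-digit token, and stands in a local check function for what would really be one API request per candidate:

```python
import itertools
import string

def brute_force_token(check, alphabet=string.digits, length=4):
    """Try every candidate token until `check` accepts one.

    `check` stands in for a request to the (hypothetical) API endpoint;
    a real attack would issue one network request per candidate.
    """
    for combo in itertools.product(alphabet, repeat=length):
        candidate = "".join(combo)
        if check(candidate):
            return candidate
    return None  # keyspace exhausted without a match
```

A 4-digit numeric token gives only 10,000 possibilities, which is why rate limiting and longer, random tokens matter on any endpoint like this.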

After spending more than a decade building up massive profits off targeted advertising, Google announced on Wednesday that it’s planning to do away with any sort of individual tracking and targeting once the cookie is out of the picture. In a lot of ways, this announcement is just Google’s way of doubling down on its long-running pro-privacy proclamations, starting with the company’s initial 2020 pledge to eliminate third-party cookies in Chrome by 2022. The privacy-protective among us can agree that killing off these sorts of omnipresent trackers and targeters is a net good, but it’s not time to start cheering the privacy bona fides of a company built on our data—as some were inclined to do after Wednesday’s announcement. As the cookie-kill date creeps closer and closer, we’ve seen a few major names in the data-brokering and adtech biz—shady third parties that profit off of cookies—try to come up with a sort of “universal identifier” that could serve as a substitute once Google pulls the plug. In some cases, these new IDs rely on people’s email logins that get hashed and collectively scooped up from tons of sites across the web. In other cases, companies plan to flesh out the scraps of a person’s identifiable data with other data that can be pulled from non-browser sources, like their connected television or mobile phones. There are tons of other schemes that these companies are coming up with amid the cookie countdown, and apparently, Google’s having none of it. “We continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers,” David Temkin, who heads Google’s product management team for “Ads Privacy and Trust,” wrote in a blog post published on Wednesday. 
In response, Temkin noted that Google doesn’t believe that “these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions.” Based on that, these sorts of products “aren’t a sustainable long term investment,” he added, noting that Google isn’t planning on building “alternate identifiers to track individuals” once the cookie does get quashed. What Google does plan on building, though, is its own slew of “privacy-preserving” tools for ad targeting, like its Federated Learning of Cohorts, or FLoC for short. Just to get people up to speed: While cookies (and some of these planned universal IDs) track people by their individual browsing behavior as they bounce from site to site, under FLoC, a person’s browser would take any data generated by that browsing and basically plop it into a large pot of data from people with similar browsing behavior—a “flock,” if you will. Instead of being able to target ads against people based on the individual morsels of data a person generates, Google would allow advertisers to target these giant pots of aggregated data. We’ve written out our full thoughts on FLoC before—the short version is that, like the majority of Google’s privacy pushes that we’ve seen until now, the FLoC proposal isn’t as user-friendly as you might think. For one thing, others have already pointed out that this proposal doesn’t necessarily stop people from being tracked across the web; it just ensures that Google’s the only one doing it. This is one of the reasons that the upcoming cookiepocalypse has already drawn scrutiny from competition authorities over in the UK. Meanwhile, some American trade groups have already loudly voiced their suspicions that what Google’s doing here is less about privacy and more about tightening its obscenely tight grip on the digital ad economy. To learn more turn your attention to OUR FORUM.
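The "flock" idea can be illustrated with a toy SimHash, a locality-sensitive hash of the general kind Google's FLoC proposal was built around: users with similar browsing histories tend to land in the same small cohort ID, and advertisers only ever see the cohort, not the individual. This is a heavily simplified sketch, not Google's actual algorithm:

```python
import hashlib

def simhash_cohort(domains, bits=8):
    """Map a browsing history (a list of visited domains) to one of
    2**bits cohort IDs. Similar histories tend to collide, which is
    the point: the cohort hides the individual inside a crowd."""
    counts = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)
```

Critics' point survives the sketch: the browser still computes the cohort from your full browsing history, so the question becomes who gets to see that computation, not whether it happens.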

It may have taken some time, but 5G is slowly starting to build momentum in the US. All major carriers now have nationwide 5G deployments covering at least 200 million people, with T-Mobile in the lead covering over 270 million people with its low-band network at the end of 2020. Verizon ended the year with a low-band network that covered 230 million, while AT&T's version reached 225 million. Next-generation networks from all the major carriers are set to continue to expand in the coming months, laying the foundation for advancements such as replacing home broadband, remote surgery, and self-driving cars that are expected to dominate the next decade. But with all that activity by competing carriers, there are myriad different names for 5G -- some of which aren't actually 5G. The carriers have had a history of twisting their stories when it comes to wireless technology. When 4G was just coming around, AT&T and T-Mobile opted to rebrand their 3G networks to take advantage of the hype. Ultimately the industry settled on 4G LTE. One technology, one name. Differing technologies and approaches for presenting 5G, however, have made this upcoming revolution more confusing than it should be. Here's a guide to help make sense of it all. When it comes to 5G networks, there are three different versions that you should know about. While all are accepted as 5G -- and Verizon, AT&T, and T-Mobile have pledged to use multiple flavors going forward for more robust networks -- each will give you different experiences. The first flavor is known as millimeter-wave (or mmWave). This technology has been deployed over the course of the last two years by Verizon, AT&T and T-Mobile, though it's most notable for being the 5G network Verizon has touted across the country. Using a much higher frequency than prior cellular networks, millimeter-wave allows for a blazing-fast connection that in some cases reaches well over 1Gbps. The downside? 
That higher frequency struggles when covering distances and penetrating buildings, glass, or even leaves. It also has had some issues with heat. Low-band 5G is the foundation for all three providers' nationwide 5G offerings. While at times a bit faster than 4G LTE, these networks don't offer the same crazy speeds that higher-frequency technologies like millimeter-wave can provide. The good news, however, is that this network functions similarly to 4G networks in terms of coverage, allowing it to blanket large areas with service. It should also work fine indoors. In between the two, mid-band is the middle ground of 5G: faster than low-band, but with more coverage than millimeter-wave. This was the technology behind Sprint's early 5G rollout and one of the key reasons T-Mobile worked so hard to purchase the struggling carrier. The company has worked diligently since closing the deal, quickly deploying its mid-band network across the United States. It now covers over 100 million people with the faster service, with a goal of reaching 200 million before the end of 2021. T-Mobile has said that it expects average download speeds over the mid-band network to be between 300 and 400Mbps, with peak speeds of 1Gbps. While T-Mobile, AT&T, and Verizon have plenty of low-band spectrum, mid-band has previously been used by the military, making it a scarce resource despite its cellular benefits. Thankfully, even with the name changes in marketing and ads, the icons on phones and devices will remain the same. "Our customers will see a simple 5G icon when connecting to the next-generation wireless network, regardless of which spectrum they're using," said a T-Mobile spokesman. Complete details can be found on OUR FORUM.

A previously undetected piece of malware found on almost 30,000 Macs worldwide is generating intrigue in security circles, and security researchers are still trying to understand precisely what it does and what purpose its self-destruct capability serves. Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute. So far, however, researchers have yet to observe the delivery of any payload on any of the infected 30,000 machines, leaving the malware’s ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action once an unknown condition is met. Also curious, the malware comes with a mechanism to completely remove itself, a capability that’s typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists. Besides those questions, the malware is notable for a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze installation package contents or the way that the package uses the JavaScript commands. The malware has been found in 153 countries with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder. Researchers from Red Canary, the security firm that discovered the malware, are calling the malware Silver Sparrow. 
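The hourly check-in behavior described above is a standard command-and-control polling loop. A minimal, illustrative sketch follows; the URL and JSON field names are placeholders of our own invention, not Silver Sparrow's real infrastructure:

```python
import json
import time
import urllib.request

# Placeholder address, not the real control server.
CONTROL_URL = "https://example.invalid/agent/checkin"

def check_in(url: str = CONTROL_URL):
    """Fetch the current command document from the control server.

    Returns the parsed JSON, or None if the server is unreachable --
    mirroring how such malware simply waits for its next attempt."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except OSError:
        return None

def poll_loop(interval: int = 3600):
    """Once an hour, ask the server whether there is anything to run."""
    while True:
        command = check_in()
        if command and command.get("payload_url"):
            # A real implant would fetch and execute the payload here.
            print("new payload advertised:", command["payload_url"])
        time.sleep(interval)
```

Because no payload has ever been observed, the real loop has so far only ever taken the "nothing to do, sleep again" branch -- which is exactly what makes the malware's purpose so hard to pin down.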
“Though we haven’t observed Silver Sparrow delivering additional malicious payloads yet, its forward-looking M1 chip compatibility, global reach, relatively high infection rate, and operational maturity suggest Silver Sparrow is a reasonably serious threat, uniquely positioned to deliver a potentially impactful payload at a moment’s notice,” Red Canary researchers wrote in a blog post published on Friday. “Given these causes for concern, in the spirit of transparency, we wanted to share everything we know with the broader infosec industry sooner rather than later.” Silver Sparrow comes in two versions—one with a binary in mach-object (Mach-O) format compiled for Intel x86_64 processors, and the other with a Mach-O binary compiled for the M1. So far, researchers haven’t seen either binary do much of anything, prompting the researchers to refer to them as “bystander binaries.” Curiously, when executed, the x86_64 binary displays the words “Hello World!” while the M1 binary reads “You did it!” The researchers suspect the files are placeholders, there to give the installer something to distribute outside the JavaScript execution. Apple has revoked the developer certificate for both bystander binary files. Silver Sparrow is only the second piece of malware to contain code that runs natively on Apple’s new M1 chip. An adware sample reported earlier this week was the first. Native M1 code runs with greater speed and reliability on the new platform than x86_64 code does because the former doesn’t have to be translated before being executed. Many developers of legitimate macOS apps still haven’t completed the process of recompiling their code for the M1. Silver Sparrow’s M1 version suggests its developers are ahead of the curve. Once installed, Silver Sparrow searches for the URL the installer package was downloaded from, most likely so the malware operators will know which distribution channels are most successful. In that regard, Silver Sparrow resembles previously seen macOS adware. 
It remains unclear precisely how or where the malware is being distributed or how it gets installed. The URL check, though, suggests that malicious search results may be at least one distribution channel, in which case, the installers would likely pose as legitimate apps. For more turn to OUR FORUM.