
It can happen in the blink of an eye. You put your Android phone down on a counter at the checkout stand or feel a slight bump as you get off the subway, only to realize later that your phone is missing. Whether it's stolen or simply misplaced, losing your phone is a stressful experience: it cuts off your access to the rest of the world, it's likely the most personal device you own, and replacing it is a costly nuisance. In the event your phone goes missing, don't panic! There are tools built into every Android phone that make it possible to lock and track down a lost phone with ease. But first, you'll need to take a few steps now to set yourself up for success if and when your phone does go missing -- even if you only left it in the house. Do yourself a favor and turn on passcode and fingerprint authentication. Do yourself another favor and don't use facial recognition on your Android device. On most Android devices, the technology used for facial recognition can be easily tricked with something as simple as a photo of your face. Google's Pixel 4 and Pixel 4 XL are the exceptions here, as they use a more reliable system similar to Apple's Face ID. Next, create your passcode and set up fingerprint authentication in the Settings app under the Security section. I realize scanning a fingerprint or entering a PIN code every time you want to use your phone can be inconvenient, but the idea of someone having access to your photos, banking apps, email, and the rest of your personal info is downright scary. An extra step to unlock your phone is worth the effort when you consider the potential impact of exposing your personal info to a stranger. Any time you sign in to an Android device with a Google account, Find My Device is automatically turned on.
Google's free Find My Device service is what you'll use to track, remotely lock, and remotely erase your phone should it ever go missing. Check to make sure Find My Device is enabled on your Android phone by opening the Settings app and going to Security & Location > Find My Device. Alternatively, if your device doesn't have a Security & Location option, go to Google > Security > Find My Device. Find My Device should be turned on. If not, slide the switch to the On position. Finally, double-check that the ability to secure and remotely erase the device is turned on by going to android.com/find on your computer, selecting your phone, and clicking Set Up Secure & Erase. A push alert will be sent to your phone -- tap it to finish the setup process. Samsung has long offered a Find My Mobile service to help Galaxy phone owners track down their lost phones. The service is separate from Google's Find My Device offering, and is something you can -- and definitely should -- set up. Not only does it give you a backup service you can use to track down a lost phone, but it also gives you tools that Find My Device doesn't have. With Samsung's service, you can force remote backups or see whether someone has swapped out your SIM card. You'll need to use your Samsung account to set up Find My Mobile. More recently, Samsung announced a new service called SmartThings Find. The new feature works like Apple's Find My app by crowdsourcing the location of a lost device, even if it's offline, by telling nearby Galaxy devices to look for its Bluetooth signal and report its location if it's found. All of this, of course, is done anonymously. To use SmartThings Find, you'll need a Galaxy device running Android 8 or newer. The setup process should already be taken care of as long as you're running the latest version of the SmartThings app.
I had to go into the Galaxy Store app and update it myself, but once I did, the main page of the SmartThings app showed a map with the last location of my Galaxy Buds, along with the other Samsung devices linked to my account below the map. If it's not set up automatically, you may have to tap a SmartThings Find button and follow the prompts to register your device. Once it's turned on, you can view the location of your device(s) by opening the SmartThings app and selecting SmartThings Find. Read this how-to in its entirety on OUR FORUM.

As the 116th Congress comes to an end, the annual defense authorizing legislation (NDAA) is among its most important pending matters — and tucked within it is the most important internet issue that you’ve probably never heard of. While not as visible as COVID relief or continuing government funding, the massive Fiscal Year 2021 NDAA Conference Committee report addresses many important defense and non-defense issues, including the naming of military bases after Confederate officers, limits on the President’s ability to withdraw troops from Germany and Afghanistan, a threatened presidential veto over the absence of a repeal of Section 230 and much more — to say nothing of the roughly $740 billion in military programs the law would authorize for the current fiscal year. Amid these, both the House and Senate bills and the Conference Report address an important internet issue that is not much discussed and not much understood outside of a small circle of industry, scholarly, military, intelligence, and law enforcement experts. The resolution of the issue (which won’t get the kind of attention that creating a new “National Cyber Director” will get) could have an enormous impact on the shape and future of the entire internet — far beyond the military and defense communities. Labeled “information sharing,” to put it most simply, it’s whether the U.S. Government (or any government) should regulate and control information about cyber threats that is shared by internet (and other) companies with U.S. military, law enforcement, and intelligence agencies — or whether the sharing of cyber threat information by internet companies should continue to be voluntary and led by industry. The issue is often addressed in vague terms, but at its core, it divides American industry, the tech sector, and even the internet industry itself — and its resolution will establish basic rules for how the internet is regulated by the U.S. government and most other governments. 
The Fiscal 2021 NDAA Conference Report partly addresses this issue and partly postpones it. That’s not surprising, given its complexity and enormous implications for the shape of the internet. Aside from the political fact that nearly everyone supports “cooperation on cybersecurity” between government agencies and internet companies, the debate over mandatory versus voluntary cooperation is further complicated by the fact that serious cyber threats to the U.S. originate not only from foreign militaries but also from anyone from a bored high school student to a professional crime ring. Cyber threats from any of these could jeopardize large parts of our economy or social structure. So, a major underlying issue in mandatory versus voluntary “information sharing” is that the problem being addressed is not just defending against a foreign military attack on the United States. It is, arguably, defending against any type of cyber threat from anyone. The details are quite complex, but the core issue has been hotly debated for over a decade and even echoes policy debates over industry regulation that go back to the 1980s. Like several other cybersecurity issues, “information sharing” was highlighted by the recent report of the Cyberspace Solarium Commission, which looked at the full scope of cyber threats to the U.S. and set forth a wide range of proposals to improve America’s cybersecurity. The Commission singled out companies that are part of the “defense industrial base” (which could include quite a large swath of the internet industry) and concluded that they and other internet companies need some form of new, mandatory information sharing for the national security of the United States.
Historically, there have been many — mostly in intelligence, law enforcement, and the military — who believe that major internet companies should be legally required to rapidly share information about cyber threats with law enforcement, military, and intelligence agencies. These advocates of mandatory and regulated information sharing are supported by some defense contractors and many businesses that depend on the integrity of the internet for their business. Generally, their view is that whatever drawbacks this form of regulating the internet may have are a small price to pay for the significant increase in security and stability that mandatory and regulated information sharing would offer. For more visit OUR FORUM.

In just the last two months, the cybercriminal-controlled botnet known as TrickBot has become, by some measures, public enemy number one for the cybersecurity community. It's survived takedown attempts by Microsoft, a supergroup of security firms, and even US Cyber Command. Now it appears the hackers behind TrickBot are trying a new technique to infect the deepest recesses of infected machines, reaching beyond their operating systems and into their firmware. Security firms AdvIntel and Eclypsium today revealed that they've spotted a new component of the trojan that TrickBot hackers use to infect machines. The previously undiscovered module checks victim computers for vulnerabilities that would allow the hackers to plant a backdoor in deep-seated code known as the Unified Extensible Firmware Interface, which is responsible for loading a device's operating system when it boots up. Because the UEFI sits on a chip on the computer’s motherboard outside of its hard drive, planting malicious code there would allow TrickBot to evade most antivirus detection, software updates, or even a total wipe and reinstallation of the computer's operating system. It could alternatively be used to "brick" target computers, corrupting their firmware to the degree that the motherboard would need to be replaced. The TrickBot operators' use of that technique, which the researchers are calling "TrickBoot," makes the hacker group just one of a handful—and the first that's not state-sponsored—to have experimented in the wild with UEFI-targeted malware, says Vitali Kremez, a cybersecurity researcher for AdvIntel and the company's CEO. But TrickBoot also represents an insidious new tool in the hands of a brazen group of criminals—one that's already used its foothold inside organizations to plant ransomware and partnered with theft-focused North Korean hackers. 
"The group is looking for novel ways to get very advanced persistence on systems, to survive any software updates and get inside the core of the firmware," says Kremez. If they can successfully penetrate a victim machine's firmware, Kremez adds, "the possibilities are endless, from destruction to basically complete system takeover." While TrickBoot checks for a vulnerable UEFI, the researchers have not yet observed the actual code that would compromise it. Kremez believes hackers are likely downloading a firmware-hacking payload only to certain vulnerable computers once they're identified. "We think they've been handpicking high-value targets of interest," he says. The hackers behind TrickBot, generally believed to be Russia-based, have gained a reputation as some of the most dangerous cybercriminal hackers on the internet. Their botnet, which at its peak has included more than a million enslaved machines, has been used to plant ransomware like Ryuk and Conti inside the networks of countless victims, including hospitals and medical research facilities. The botnet was considered menacing enough that two distinct operations attempted to disrupt it in October: One, carried out by a group of companies including Microsoft, ESET, Symantec, and Lumen Technologies, sought to use court orders to cut TrickBot's connections to the US-based command-and-control servers. Another simultaneous operation by US Cyber Command essentially hacked the botnet, sending new configuration files to its compromised computers designed to cut them off from the TrickBot operators. It's not clear to what degree the hackers have rebuilt TrickBot, though they have added at least 30,000 victims to their collection since then by compromising new computers or buying access from other hackers, according to security firm Hold Security. 
AdvIntel's Kremez came upon the new firmware-focused feature of TrickBot—whose modular design allows it to download new components on the fly to victim computers—in a sample of the malware in late October, just after the two attempted takedown operations. He believes it may be part of an attempt by TrickBot's operators to gain a foothold that can survive on target machines despite their malware's growing notoriety throughout the security industry. "Because the whole world is watching, they've lost a lot of bots," says Kremez. "So their malware needs to be stealthy, and that's why we believe they focused on this module." To learn more visit OUR FORUM.

Windows 10 isn’t as sluggish and bloated as some versions that have come before, which means you shouldn’t have any serious performance complaints. Then again, why leave free performance on the table by running unnecessary services? There’s a long list of Windows 10 services that most users don’t need, so you can safely disable these unnecessary services and satisfy your craving for pure speed. First, some common-sense advice: Windows services all have specific jobs. Some of these jobs are critical for your computer to work properly. If you disable a Windows service that’s needed for the normal operation of your computer, you can get locked out of your machine or may have to undo what you’ve done. We tested disabling all the unnecessary services listed below via the Services app on our computer. However, we can’t take any responsibility for something going wrong with your specific machine. Don’t mess around with random services not listed here, and always create a system restore point or system backup before making changes. We rate a process as “safe to disable” if it doesn’t affect the core functionality of your computer, but we don’t recommend that you actually disable every single one of these services, since they are not harmful and can be useful too. Do you have a printer? Do you ever use it? Printers are becoming a niche item as we all transition to paperless documentation and use smartphone cameras to scan documents. If you don’t use a printer, then you can safely disable the print spooler. This is the service that manages and queues print jobs. Without any print jobs to process, it just sits there using up RAM and CPU time. Windows Image Acquisition is the service that waits until you press the button on your scanner and then manages the process of getting the image where it needs to go. It also handles communication with digital cameras and video cameras that you connect directly to your computer, so be aware of that if you need this function.
Unbelievably, there are actually plenty of businesses that still use fax machines. Fax usage is very niche, however, so it’s almost certain that you don’t need fax services on your computer. If you are one of the five people sending and receiving faxes from your computer, well then this doesn’t apply to you. Also, buy a scanner instead. It’s safe to disable the Bluetooth service if you don’t need it, and doing so can be a precaution against Bluetooth attacks too. These days, Bluetooth devices such as mice, game controllers, and headphones are common, so only the small number of users who never use Bluetooth should consider this. Windows Search is safe to disable and can have a noticeable effect on your performance, because disabling it also disables the Windows search indexer. It’s not something we recommend most people do, however: instant, fast search is one of the best features of Windows 10. It’s an option if you don’t make much use of Windows search or your CPU is really slow; go ahead and disable it to see if it boosts performance. Windows sends an error report back to Microsoft when things go wrong, and Microsoft uses this information to fix problems in future updates. Some people have a privacy issue with this and choose not to send reports. If you don’t want to send error reports to Microsoft, you can go beyond selecting Don’t send every time and disable the entire service. Disabling these services won’t give you drastic speed boosts, though you may get an extra frame or two out of your video games or be able to open a few more tabs in your browser. There are several more services you can stop, but we strongly recommend against messing with Windows services you are unsure about. It’s especially risky to disable services that are essential to your hardware, such as those related to your graphics card. Always research a given Windows service before you disable it. For more Windows 10 services that can be disabled, visit OUR FORUM.

What do Cristiano Ronaldo, Bruno Mars, and Windows have in common? They're all 35 years old. It is three and a half decades since Microsoft Windows 1.0 was unleashed upon an unsuspecting world. Tottering atop MS-DOS, Windows 1.0 was released on 20 November 1985. A graphical multitasking shell, it would usher in an era of dominance on the PC that lingers on today. The secret sauce was IBM support, which brought a huge chunk of the business world along, for better or worse. Not that dominance was a sure thing back then. Windows 1.0 was by no means the only game in town: this hack has fond memories of GEM (Graphics Environment Manager), which turned up in computers from Amstrad to the Atari ST. At the time Windows was one among many, and it would take a good few iterations before the 3.x line began to dominate. First shown off two years previously, Windows 1.0 would run on 256KB of RAM and a pair of floppy drives (later versions would require a hard disk) and, most significantly, required the user to move a mouse pointer to make things happen in the 16-bit shell. At least 512KB was needed before performance improved beyond dragging Notepad through treacle. Windows 1.0 also suffered from an initial paucity of apps, with the likes of Calculator and Paint coming in the box while many MS-DOS applications would fire up in full-screen mode. The GUI also insisted on tiling the windows - no overlapping was allowed other than dialogs. After a number of incremental improvements, Windows 1.0 was replaced by Windows 2.0 in 1987, although it lingered on until support for it (as well as versions 2.0 and 3.0) ended in 2001. Windows was the mainstay of Microsoft profits in the 90s, thanks to some sharp elbows on the OEM front from its legal department. The money rolled in, and Redmond wanted more. When smarter mobile phones started kicking off in the late 90s, Microsoft made the first of many failed attempts to break into the mobile market with Windows CE, or WinCE as it became known.
It didn't last long, despite some notable handsets, but Microsoft kept trying. Enthused by then-CEO Steve Ballmer, who had originally dismissed the iPhone, Microsoft tried again with Windows Phone 7, launched in 2010. Despite excellent hardware from Nokia, which Redmond bought and then gutted, the OS never caught on with developers, and a lack of backward compatibility with the new kit killed demand. As for tablets, Redmond first dipped Windows' toe into the market in 2003 with the Microsoft Tablet PC. Redmond has kept up its interest in this area - and the latest Surface fondle-slabs are very nice, if expensive, pieces of kit. Now Windows has evolved into a cloud operating system and is maintaining its position in the mainstream. Microsoft has managed the transition from in-box code to cloud better than most, albeit a bit late. Ray Ozzie, hired as Microsoft's cloud guru in 2006, saw the writing on the wall and warned Redmond that Windows would have to get cloudy. He was forced out, although not before founding Azure. The smart folks took note - not least Satya Nadella, who is cloud to the core. It seems odd that some of today's senior Windows coders weren't even a glint in the milkman's eye when the first build of the OS came into being, but the effect of the operating system is undeniable. Looking back, Windows 1.0 was a curiosity, in spite of the enthusiasm for the product from Microsoft boss Bill Gates. Business users were content to stick with DOS while consumers looked to alternatives, including the likes of Atari or Commodore, for their home computing fun. However, Windows 1.0 marked a change for Microsoft and an attempt to focus more on applications. APIs for video and mouse hardware moved things on from the DOS environment, and PC software and hardware makers would flock to the platform as the decades rolled by. For better or for worse. Complete details are posted on OUR FORUM.

Last Thursday afternoon, Mac users everywhere began complaining of a crippling slowdown when opening apps. The cause: online certificate checks Apple performs each time a user opens an app not downloaded from the App Store. The mass upgrade to Big Sur, it seems, caused the Apple servers responsible for these checks to slow to a crawl. Apple quickly fixed the slowdown, but concerns about paralyzed Macs were soon replaced by an even bigger worry — the vast amount of personal data Apple, and possibly others, can glean from Macs performing certificate checks each time a user opens an app that didn’t come from the App Store. For people who understood what was happening behind the scenes, there was little reason to view the certificate checks as a privacy grab. Just to be sure, though, Apple on Monday published a support article that should quell any lingering worries. More about that later — first, let’s back up and provide some background. Before Apple allows an app into the App Store, it must first pass a review that vets its security. Users can configure the macOS feature known as Gatekeeper to allow only these approved apps, or they can choose a setting that also allows the installation of third-party apps, as long as those apps are signed with a developer certificate issued by Apple. To make sure the certificate hasn’t been revoked, macOS uses OCSP — short for the industry-standard Online Certificate Status Protocol — to check its validity. Checking the validity of a certificate — any certificate — authenticating a website or piece of software sounds simple enough, but it has long presented problems industrywide that aren’t easy to solve. The initial means was the use of certificate revocation lists (CRLs), but as the lists grew, their size prevented them from working effectively. CRLs gave way to OCSP, which performs the check against remote servers. OCSP, it turned out, had its own drawbacks.
Servers sometimes go down, and when they do, OCSP server outages have the potential to paralyze millions of people trying to do things like visit sites, install apps, and check email. To guard against this hazard, OCSP defaults to what’s called a “soft fail.” Rather than block the website or software that’s being checked, OCSP will act as if the certificate is valid in the event that the server doesn’t respond. Somehow, the mass number of people upgrading to Big Sur on Thursday seems to have caused the servers at ocsp.apple.com to become overloaded but not fall over completely. The server couldn’t provide the all-clear, but it also didn’t return an error that would trigger the soft fail. The result was huge numbers of Mac users left in limbo. Apple fixed the problem with the availability of ocsp.apple.com, presumably by adding more server capacity. Normally, that would have been the end of the issue, but it wasn’t. Soon, social media was awash in claims that the macOS app-vetting process was turning Apple into a Big Brother that was tracking the time and location whenever users open or reopen any app not downloaded from the App Store. The post Your Computer Isn’t Yours was one of the catalysts for the mass concern. It noted that the simple HTTP GET requests performed by OCSP were unencrypted. That meant that not only was Apple able to build profiles based on our minute-by-minute Mac usage but so could ISPs or anyone else who could view traffic passing over the network. (To prevent falling into an infinite authentication loop, virtually all OCSP traffic is unencrypted, although responses are digitally signed.) Fortunately, less alarmist posts like this one provided more helpful background. The hashes being transmitted weren’t unique to the app itself but rather the Apple-issued developer certificate.
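The soft-fail logic described above can be sketched in a few lines of Python. This is a hypothetical illustration rather than Apple's actual implementation: `check_app_certificate` and the `query_ocsp` callback are invented names, and the only point is that a responder failure is treated as a pass rather than a block.

```python
# Hypothetical sketch of OCSP "soft fail": if the responder cannot be
# reached, the certificate is treated as valid rather than blocking the app.

def check_app_certificate(serial, query_ocsp):
    """Return True if the developer certificate should be accepted.

    `query_ocsp` is a callback that contacts the OCSP responder and
    returns "good" or "revoked"; it raises an exception (for example,
    a timeout) when the server is unreachable.
    """
    try:
        status = query_ocsp(serial)
    except Exception:
        # Soft fail: the responder is down, so assume the certificate
        # is still valid instead of locking the user out of the app.
        return True
    return status == "good"
```

The Big Sur incident was effectively a responder that neither answered nor errored: a real client would also need a timeout around `query_ocsp`, so that a slow server degrades into the soft-fail path instead of leaving app launches hanging.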
That still allowed people to infer when an app such as Tor, Signal, Firefox, or Thunderbird was being used, but it was still less granular than many people first assumed. In an attempt to further reassure Mac users, Apple on Monday published a post. It explains what the company does and doesn’t do with the information collected through Gatekeeper and a separate feature known as notarization, which checks the security even of non-App Store apps. The post went on to say that in the next year, Apple will introduce a new protocol to check whether developer certificates have been revoked, provide “strong protections against server failure,” and add a new OS setting for users who want to opt out of all of this. The controversy over behavior that macOS has exhibited since at least the Catalina version, introduced last October, underscores the tradeoff that sometimes occurs between security and privacy. Gatekeeper is designed to make it easy for less experienced users to steer clear of apps that are known to be malicious. To make use of Gatekeeper, users have to send a certain amount of information to Apple. Not that Apple is completely without fault. For one thing, Apple hasn’t provided an easy way to opt out of OCSP checks. That has made blocking access to ocsp.apple.com the only way to do so, and for less experienced Mac users, that’s too hard. For more turn to OUR FORUM.
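For the curious, the blunt workaround mentioned above amounts to a hosts-file entry that routes the OCSP responder to a dead address. This is shown only to illustrate the technique being discussed; it also disables a legitimate security check and can break once Apple changes its new protocol, so it isn't something most users should do:

```
# /etc/hosts — route Apple's OCSP responder to an unroutable address
0.0.0.0 ocsp.apple.com
```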