
Google is adding a new feature to Google Chrome that will warn users about similar, or lookalike, URLs that they may visit thinking they are going to the legitimate site. This feature is designed to warn users when they land on typosquatting domains, IDN homograph (Unicode) attacks, scams, and phishing sites. In the current Canary builds of Chrome 74, a new experimental feature has been added that will alert users when they visit a URL that may be pretending to be, or acting as a "lookalike" of, a legitimate URL, such as appl3.com, tw1tter.com, or m1crosoft.com. When users go to these URLs, Chrome will display a warning under the address bar stating "Did you mean to go to [ url ]?". For example, when we tried to go to the appl3.com URL, it asked "Did you mean to go to apple.com/?". By default, this feature is only available in the Chrome Canary builds of Chrome 74. To test the lookalike feature, download Chrome Canary and enter chrome://flags into the address bar. On the Experiments page, search for "lookalike" and change "Navigation suggestions for lookalike URLs" to Enabled; you will then be prompted to relaunch the browser. A new Chromium Gerrit post indicates that these lookalike warnings may eventually be moved to their own browser interstitial instead. Chrome uses interstitial pages to display warnings to users before they actually visit the requested site. Read the whole story on OUR FORUM.
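To make the idea concrete, lookalike detection of this kind typically boils down to comparing a visited hostname against a list of well-known domains using an edit-distance metric. The Kotlin sketch below illustrates that general technique; it is an illustration under that assumption, not Chromium's actual implementation, and the domain list is invented for the example.

// Minimal lookalike-URL check: flag hostnames within edit distance 1
// of a well-known domain. Illustrative only; Chromium's real heuristics
// are more sophisticated.

// Classic Levenshtein edit distance via dynamic programming.
fun levenshtein(a: String, b: String): Int {
    val dp = Array(a.length + 1) { IntArray(b.length + 1) }
    for (i in 0..a.length) dp[i][0] = i
    for (j in 0..b.length) dp[0][j] = j
    for (i in 1..a.length) for (j in 1..b.length) {
        val cost = if (a[i - 1] == b[j - 1]) 0 else 1
        dp[i][j] = minOf(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    }
    return dp[a.length][b.length]
}

// Hypothetical allowlist of legitimate domains for the demo.
val popularDomains = listOf("apple.com", "twitter.com", "microsoft.com")

// Returns the legitimate domain the host resembles, or null if none.
fun suggestLookalike(host: String): String? =
    popularDomains.firstOrNull { it != host && levenshtein(host, it) == 1 }

fun main() {
    println(suggestLookalike("appl3.com"))   // apple.com
    println(suggestLookalike("tw1tter.com")) // twitter.com
    println(suggestLookalike("example.com")) // null
}

A real implementation would also need to handle confusable Unicode characters (the IDN homograph case mentioned above), which a plain edit distance over ASCII strings does not capture.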


Dozens of Android camera applications, some with over 1 million installs on the Google Play Store, were serving malicious ads and fake update prompts while also making sure they could not be uninstalled by hiding their entries from the application list. Lorin Wu, a mobile threats analyst at Trend Micro, sorted these malicious apps into two categories: some were variations of the same camera application designed to beautify photos, while the others let users apply photo filters to their snapshots. These apps have all been removed from the Google Play Store by now, but not before they amassed millions of installations (some of them most probably fake). All of them were also clearly connected to one another, as they shared various design components such as the screenshots added to their Google Play entries. According to Wu, the beauty camera apps, detected as AndroidOS_BadCamera.HRX, were "capable of accessing remote ad configuration servers that can be used for malicious purposes." After installation, they would automatically hide from the application list to make sure the victim could not remove them, and would start displaying adult and fraudulent-content ads in the default web browser after every device unlock. To add insult to injury, the user could not pinpoint which app pushed the ads, while some of the advertisements redirected victims to websites that asked for personal information in exchange for various fake prizes. Learn more from OUR FORUM.
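For context, the icon-hiding trick described above is commonly achieved on Android by disabling the app's launcher activity component, which removes the app's entry from the launcher while leaving it installed and running. The Kotlin sketch below shows that general technique; the package and class names are placeholders, and this is not code recovered from the apps in question.

import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager

// Disables the (placeholder) launcher activity so the app's icon no longer
// appears in the app drawer; DONT_KILL_APP keeps its processes alive.
fun hideLauncherIcon(context: Context) {
    context.packageManager.setComponentEnabledSetting(
        ComponentName(context.packageName, "com.example.camera.LauncherActivity"),
        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
        PackageManager.DONT_KILL_APP
    )
}

Because the component is disabled rather than uninstalled, the app no longer shows up in the launcher yet can still run in the background and push ads, which matches the behavior Trend Micro describes.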

An investigation has revealed that Facebook has been paying people aged between 13 and 35 to install a data-harvesting VPN tool. The "Facebook Research" VPN was offered to iOS and Android users, who were paid up to $20 per month -- plus referral commissions -- to give the social network near-unfettered access to phone, app, and web usage data (a root certificate is installed, granting a terrifying level of access). As news of the activity came to light, Facebook announced that the program (sometimes referred to as Project Atlas) is being terminated on iOS, but it seems it will continue on Android. If this sounds slightly familiar, think back a few months to when Facebook's Onavo Protect VPN was kicked out of the App Store for violating Apple's data collection rules. The investigation was carried out by TechCrunch, which found that Facebook has been using the research program for some time to "gather data on usage habits". The Facebook Research app was made available through a range of beta-testing services, and in this way was able to "sidestep" the App Store. TechCrunch says that users were asked to install the app and provide "root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity". Learn more by visiting OUR FORUM.

A serious Apple iOS bug has been discovered that allows FaceTime users to access the microphone and front-facing camera of the person they are calling, even if that person does not answer the call. To exploit this bug, a caller would FaceTime another person with an iOS device and, before the recipient answers, add themselves as an additional contact in Group FaceTime. This causes the recipient's microphone to turn on, allowing the caller to listen to what is happening in the room. Even worse, if the person being called presses the power button to mute the FaceTime call, the front-facing camera turns on as well. This means that if someone calls you on FaceTime, they could be listening to and watching what you are doing without your knowledge. BleepingComputer has tested and confirmed that this bug works in iOS 12.1.2, and we were able to hear and see the person. When testing it against an Apple Watch, though, we were not able to get the audio portion of the bug to work. While it is not known who first discovered this bug, numerous people have been posting about it on social media and making video demonstrations. When 9to5Mac first reported on the bug, they were only able to get the microphone snooping working. Later, BuzzFeed reported that they could also access the front-facing camera and that Apple stated that they are "aware of this issue and we have identified a fix that will be released in a software update later this week." We have the video and instructions on disabling FaceTime posted on OUR FORUM.

The new operating system could look more like Windows 7 than Windows 10. Rumors that Microsoft has been developing a simplified, locked-down version of Windows for budget devices have been surfacing for a while now. First, the name "Windows Lite" was spotted in a Windows 10 SDK, and then reporter Brad Sams claimed he'd discussed it with Microsoft employees. Now it seems that, in addition to structural changes, Windows Lite will also get an aesthetic change that includes dropping Live Tiles.
According to sources who spoke with Windows Central, Windows Lite will feature a static app launcher in place of Windows 10's Start menu, much like Chrome OS, Android, and iOS. This means that Windows Lite will likely drop support for Live Tiles, a current Windows 10 feature that lets apps stream information in place of their icons, such as live weather updates or the number of unread emails. Microsoft apparently has two reasons for cutting the feature: first, no one was using it. Very few users open the Start menu to look at Live Tiles, which means that even major apps aren't focused on taking advantage of the feature. Second, the overall design language of Windows Lite will be simpler to reduce system requirements, and redesigning the Start menu is part of that. According to a concept image, Windows Lite is likely to be more colorful and to bring back some of the soft curves and comfortable feel of Windows 7. It will also bring smooth performance to budget systems. Like Google's Chrome OS, Windows Lite is expected to be designed for systems that may have only 32GB of storage or 2GB of RAM. To accomplish this feat, Windows Lite may exclusively use Progressive Web Apps, which are apps built on web technologies, and Universal Windows Platform apps, which are meant to work on any Microsoft operating system from Windows 10 Mobile to Xbox. This will limit Windows Lite to just the Microsoft Store, but it will save on storage and provide a decent Windows on ARM experience.
Via Techspot

Ignorance is bliss, and it's often the most ignorant who make the surest decisions, unencumbered by the knowledge that they could be wrong. In many situations this is all fine and good, but at the current level of self-driving car development, having a Tesla confidently crash into a fire truck or a white van (both of which have happened) can be rather dangerous. The issue is that self-driving cars are just smart enough to drive, but not smart enough to know when they are entering a situation outside their level of confidence and capability. Microsoft Research has worked with MIT to help cars know exactly when situations are ambiguous. As MIT News notes, a single perceived situation can receive many different feedback signals, because the system perceives many distinct situations as identical. For example, an autonomous car may have cruised alongside a large vehicle many times without slowing down and pulling over, but in one instance an ambulance, which looks exactly the same to the system, cruises by. The autonomous car doesn't pull over and receives a feedback signal that it took an unacceptable action. Because such circumstances are rare, cars may learn to ignore them even though they remain important. The new system, to which Microsoft contributed, recognizes these rare situations with conflicting training signals and can learn that, even where it may have performed acceptably 90 percent of the time, the situation is still ambiguous enough to merit a "blind spot." Read much more on OUR FORUM.
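Conceptually, the approach amounts to tracking the feedback each perceived state receives and flagging states whose signals conflict. The following toy Kotlin sketch illustrates that idea under our own simplifying assumptions; the threshold, state labels, and counts are invented for the example and are not taken from the MIT/Microsoft work.

// Toy "blind spot" detector: a perceived state that draws both acceptable
// and unacceptable feedback is ambiguous, even if failures are rare.
data class Feedback(var acceptable: Int = 0, var unacceptable: Int = 0)

class BlindSpotDetector(private val ambiguityThreshold: Double = 0.05) {
    private val stats = mutableMapOf<String, Feedback>()

    // Record one feedback signal for a perceived state.
    fun record(perceivedState: String, wasAcceptable: Boolean) {
        val f = stats.getOrPut(perceivedState) { Feedback() }
        if (wasAcceptable) f.acceptable++ else f.unacceptable++
    }

    // A state is a blind spot if its minority "unacceptable" signal
    // exceeds the threshold despite mostly acceptable behavior.
    fun blindSpots(): List<String> = stats.filter { (_, f) ->
        val total = f.acceptable + f.unacceptable
        total > 0 && f.unacceptable.toDouble() / total >= ambiguityThreshold
    }.keys.toList()

}

fun main() {
    val detector = BlindSpotDetector()
    // Cruising alongside a large vehicle was acceptable dozens of times...
    repeat(95) { detector.record("large-vehicle-alongside", true) }
    // ...but the rare ambulance case, which looks identical, was not.
    repeat(5) { detector.record("large-vehicle-alongside", false) }
    println(detector.blindSpots()) // [large-vehicle-alongside]
}

The point of the threshold is that "acceptable 90 percent of the time" is not the same as safe: conflicting signals for one perceived state are evidence the system cannot distinguish situations that it should.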

