Here’s why it doesn’t matter which coding language developers choose to learn

Whether in the form of CSS, servers, or platforms, developers today are inundated with options for building software. “With all of the new technologies being thrown at us,” said Jay Harris, founder of Arana Software, “we haven’t quite figured out how to parse them out.”

So, where to start?

On Thursday at the seventh annual Code PaLOUsa event in Louisville, KY, Harris spoke to a room of more than 500 people in the software development community, including developers, architects, UX designers, business analysts, project managers, testers, DevOps engineers, and more.

“How do we know which technology to pursue? What will work? What will be beneficial for our workplace? What’s going to go away?” Harris asked. “We’re developer puppies, chasing the technology squirrels.”

In running a custom app development shop, Harris deals with these questions on a daily basis. It’s not just a question of what his company should work with, Harris said, but what his clients want to work with.

Making matters more difficult, the cool things that happened yesterday quickly become obsolete, Harris said. For developers, that can translate into a fear of trying out something new.

“We want to make right choices with tech for tomorrow, but we know we screwed it up yesterday,” he said.

Harris used the example of Microsoft’s Silverlight to illustrate the point. Silverlight, which debuted in 2007, was meant to be a golden ticket for streaming. But it wasn’t better, wasn’t stronger, and certainly wasn’t more secure, Harris said. And as ZDNet reported, Microsoft quietly began backing away from the platform. “Technologies go away,” he said. “It happens all the time. It’s supposed to happen, but we’re afraid of it.”

Harris framed developers’ struggles in personal relationship terms. “We’re afraid of technology abandonment,” he said, worried that one tech will leave us as quickly as it appeared. Or we have a “fear of rejection”—that our developer cohort won’t stick by us. Harris also pointed to a “fear of commitment” when it comes to tech. “What if I can’t handle it?” he asked. “What if it comes with baggage?”


The important thing, Harris stressed, is to overcome these fears and just plunge right in.

Technologies disappearing can sometimes be a great thing, he added. We need to give ourselves permission to try new things.

Harris also argued that we are using the wrong benchmark for success.

“Stop viewing level of success as proficiency,” said Harris. “The level of success is learning something.”

*Disclaimer: CBS Interactive, TechRepublic’s parent company, is a sponsor of the 2017 Code PaLOUsa event.

Most HCI isn’t true hybrid cloud, despite what vendors may tell you


I’ve been watching an alarming trend repeat itself in vendor marketing. It seems as if every hyperconverged infrastructure (HCI) vendor is beginning to equate purchasing an HCI platform with deploying a hybrid cloud. We’ve seen similar cloud washing play out with the private and public clouds. At the height of cloud washing, it seemed that every vendor that offered a service via the internet labeled itself a cloud provider.

Similarly, I saw CIOs claim victory on cloud migration by simply deploying VMware vSphere. As services such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud became more commonplace, a consensus formed around what is, and is not, cloud. I believe the industry must go through a similar shakeout for hybrid cloud as it relates to HCI.

SEE: VMware vSphere: The smart person’s guide (TechRepublic)

True hybrid cloud

I think it’s important to define hybrid cloud. I won’t attempt a NIST-like definition of hybrid cloud, but I’ll try to provide a commonsense acid test instead.

It’s now a common understanding that cloud consists of more than just technology. Cloud is a relatively new model of delivering IT services. If you look at the consumers of public cloud, the target audience is broad. Consumers of public cloud services range from business users to IT staff and service providers.

In building private cloud infrastructure, the goal is to provide a self-service IT offering. In theory, a private cloud provides the capability of AWS, Azure, or Google on-premises. Consumers of a private cloud include end users and internal IT. Overall, cloud should remove the friction between the cloud consumer and the underlying IT services.

Hybrid cloud requires the combination of multiple cloud services into an integrated experience. The most common combination includes private and public cloud. A hybrid cloud model allows the expansion of private cloud services to the public cloud. Cloud managers have a single control panel to manage both the private and public cloud.
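To make that “single control panel” idea concrete, here is a minimal Python sketch of what such an integration layer looks like conceptually: one provider interface, a private and a public backend, and a control plane that decides where a workload lands. The class and method names are illustrative assumptions, not any vendor’s actual API.

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Minimal interface a hybrid control plane would expect from each cloud."""

    @abstractmethod
    def provision(self, workload: str) -> str:
        """Create the workload and return an identifier."""

    @abstractmethod
    def deprovision(self, workload_id: str) -> None:
        """Tear the workload down."""


class PrivateCloud(CloudProvider):
    def provision(self, workload: str) -> str:
        # Stand-in for a call to the on-premises stack (vSphere, OpenStack, etc.).
        return f"private::{workload}"

    def deprovision(self, workload_id: str) -> None:
        print(f"released {workload_id} from on-premises capacity")


class PublicCloud(CloudProvider):
    def provision(self, workload: str) -> str:
        # Stand-in for an AWS, Azure, or Google Cloud SDK call.
        return f"public::{workload}"

    def deprovision(self, workload_id: str) -> None:
        print(f"released {workload_id} from public capacity")


class HybridControlPlane:
    """The 'single control panel': one place to place, move, and retire workloads."""

    def __init__(self, private: CloudProvider, public: CloudProvider):
        self.targets = {"private": private, "public": public}

    def place(self, workload: str, burst: bool = False) -> str:
        # Burst to the public cloud when private capacity is exhausted.
        return self.targets["public" if burst else "private"].provision(workload)


plane = HybridControlPlane(PrivateCloud(), PublicCloud())
print(plane.place("billing-api"))                 # lands on the private cloud
print(plane.place("nightly-batch", burst=True))   # expands out to the public cloud
```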

Building a true private cloud has proven to be a difficult task, and there have been many casualties along the road to private cloud success. HPE, Cisco, VMware, and Dell EMC have all experienced major shifts in their private cloud strategies over the years. Even the ambitious open source project OpenStack went through a major refocus as enterprises ran into those difficulties.

HCI’s role

Webscale was a common marketing term tossed around at the beginning of the HCI trend. HCI provides simple building blocks for building private cloud infrastructures. Infrastructure expansion was one of the complexities end user organizations experienced in maintaining a private cloud. By combining storage and compute, HCI eliminated the architectural scale challenges of private cloud. However, HCI itself doesn’t provide the services of a public cloud.

HCI makes infrastructure simple to build and manage. But the focus isn’t on the broad customer base of the leading cloud providers. For example, I haven’t come across an HCI solution where a business user receives access to a portal to provision and de-provision services. I also haven’t come across an HCI provider that creates an API to all the data center services needed to manage the modern data center.

Today, leading HCI solutions focus on making deploying and managing on-premises storage and compute resources easier. The value is simplified IT operations. The end result may indeed include a more agile IT infrastructure. However, HCI doesn’t transform the end user experience. End users may enjoy decreased wait times and improved uptime as a result of HCI, but HCI doesn’t change the experience of how to consume internal IT or external public cloud services. HCI does little to nothing to reduce friction between the consumers and IT.

When discussing hybrid cloud with your HCI vendors, ask the basic question of how the end-user experience changes with the implementation of their product. If the answers focus on time to value rather than direct consumption, the solution isn’t what I’d call true hybrid cloud.

How to find text on a Safari webpage from your iPhone


The mobile web browsing experience has become the de facto standard for many smartphone-toting folks. According to Pew Research, one in 10 American adults is a smartphone-only internet user, without traditional broadband access.

With mobile browsing, the browser itself is often modified to make it more conducive to use on a smartphone. Unfortunately, this often makes it difficult to find certain features or tools that one may use when browsing on a desktop.

One such feature is the ability to find text on a webpage. On a standard keyboard, a user can simply type Control + F (Windows) or Command + F (Mac) to find what they’re looking for. On a smartphone, though, it’s not that simple.

SEE: How to improve the security and privacy of your iPhone: 5 steps

Thankfully, the process for finding text on a Safari webpage on an iPhone isn’t terribly complicated. Here’s how to do it on iOS 10.3.2.

Start by opening the Safari application. Look for the icon with a blue compass and tap on it.


Once you have the application open, tap into the grey URL bar at the top of the screen and type in the URL of the website you want to visit. For this example, we’re going to use our favorite website, techrepublic.com. After you’ve entered the URL, tap on the blue Go button at the bottom right corner of the screen.


Once you’ve made it to the website, navigate to the webpage that you want to search. Once there, tap on the share button at the bottom of the screen that looks like a box with an arrow coming out of it. If the button isn’t there, it could be because you’ve scrolled down. Try scrolling back up to the top of the page, and it should show back up.


From here, you should see three rows of icons. On the bottom row of icons (the white and grey ones), swipe to the left until you see the Find on Page icon and tap it.


At this point a grey keyboard with a search bar built into it should appear at the bottom of the screen. Tap in the search bar and then type the word or phrase that you want to search. The tool should then highlight all the instances of it on the page in yellow.

To navigate among the highlighted instances on the page, tap the up and down arrows immediately to the left of the search bar.

When you’re finished searching, simply tap the grey Done button to the right of the search bar and the Find on Page tool will disappear.

10 ways to protect your Windows computers against ransomware


Malware has been around for decades now. And as our reliance on computing systems has grown, so too has malware proliferation. While antivirus applications were once the key element in preventing infections from occurring (and subsequently spreading), malware has evolved over time in various ways, similar to how our computer usage has changed.

With the changes to malware and its behaviors, the methods of detection and protection have had to be modified to prevent infections from assorted malware types, like spyware, ransomware, and adware—and in the case of zero-days, to mitigate the impact while limiting the exposure as much as possible.

With the recent WannaCry ransomware infection affecting users on an international scale, the stakes are extremely high for those who rely on technology to protect their data at all costs. This is especially true of critical systems, such as those that provide life-saving care in hospitals, infrastructure used to manage utilities, and information systems used in government services.

The approach to data security is not a one-size-fits-all solution, as it varies based on the organization’s needs and the resources available to it. Consideration must also be given to complying with any regulations that may exist specific to your industry.

With that said, safeguards are merely that. The risk associated with malware infections is always present, as risk can’t be eliminated. But applying multiple security applications as a layered solution provides comprehensive protection on several fronts to minimize the threat of a potential outbreak in accordance with best practices.

SEE: 98% of WannaCry victims were running Windows 7, not XP

1: Patch management for clients and servers

Keeping current with Windows Updates ensures that your clients and servers are patched against known threats. Zero-day vulnerabilities, by definition, can’t be covered. Yet WannaCry managed to infect systems in more than 150 countries at an alarming rate, despite a patch having been readily available for almost two months before the attack.

With patch management playing such a crucial role in ongoing system protection, there is no shortage of tools available to organizations of any size to help ensure that their systems stay current. First-party Microsoft tools such as Windows Server Update Services (WSUS), which is included as a role of Windows Server, and System Center Configuration Manager (SCCM) can manage patches for first- and third-party applications from deployment through remediation, with built-in reporting on the status of all managed devices.
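As a rough illustration of what patch-status reporting boils down to, the following Python sketch (Windows only) shells out to PowerShell’s Get-HotFix cmdlet and compares the installed updates against a required list. The KB numbers are placeholders to fill in for your own OS builds; this is not how WSUS or SCCM work internally, just a minimal compliance check under those assumptions.

```python
import subprocess

# Placeholder KB IDs: substitute the updates that matter for your OS builds
# (for example, the MS17-010 patches relevant to WannaCry).
REQUIRED_KBS = {"KB0000001", "KB0000002"}


def installed_hotfixes() -> set:
    """Ask Windows, via PowerShell's Get-HotFix, which updates are installed."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}


def missing_patches() -> set:
    """Return the required updates that are not yet installed on this machine."""
    return REQUIRED_KBS - installed_hotfixes()


if __name__ == "__main__":
    gaps = missing_patches()
    print("Missing required updates:", ", ".join(sorted(gaps)) or "none")
```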

2: Security software and hardware appliance updates

As stated previously, each organization has differing needs and resources for managing its network and data. Some commonalities do exist, though, such as firewalls and intrusion prevention systems (IPSes), which filter traffic at the network’s ingress and egress points. Alongside firmware and signature updates, these devices also support manual configuration to better suit your network’s protection requirements.

Actively monitoring the health of these devices, and updating their configurations as necessary to match the network’s needs, will enhance the network’s security posture and help the security appliances stave off attacks.

While these devices may not necessarily be Windows-based devices, I included them here because of the real-world benefit they provide in helping to mitigate unauthorized network intrusions and to fend off attacks.

SEE: I infected my Windows computer with ransomware to test RansomFree’s protection

3: Hardening device security

Hardening clients and servers is imperative to limit the attack surface exposed to internal and external attacks. The process of hardening a Windows client differs from that of a Windows server, in that the aims of their use can vary drastically.

By assessing what the devices will be used for, you can determine how each device should be locked down from a security standpoint. Keep in mind that any applications, services, and connected devices that are not needed or that are deprecated (such as the SMBv1 protocol that allowed the WannaCry exploit to proliferate) should be treated as a potential attack vector and disabled immediately.
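As an example of that kind of lockdown, SMBv1 can be checked and switched off on Windows 8/Server 2012 and later through the SmbServerConfiguration PowerShell cmdlets. The Python sketch below simply wraps those calls (run it from an elevated prompt); treat it as an illustration of one hardening step, not a complete procedure.

```python
import subprocess


def _ps(command: str) -> str:
    """Run a PowerShell command and return its trimmed output (needs an elevated prompt)."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def smb1_enabled() -> bool:
    # Get-SmbServerConfiguration reports the current SMBv1 state on Windows 8/Server 2012+.
    return _ps("(Get-SmbServerConfiguration).EnableSMB1Protocol").lower() == "true"


def disable_smb1() -> None:
    # Turn the deprecated protocol off; -Force suppresses the confirmation prompt.
    _ps("Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force")


if __name__ == "__main__":
    if smb1_enabled():
        disable_smb1()
        print("SMBv1 has been disabled.")
    else:
        print("SMBv1 is already disabled.")
```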

Microsoft offers the Microsoft Baseline Security Analyzer (MBSA) for clients and servers alike to perform vulnerability assessments of devices and the services that run atop them. It also makes recommendations on how to harden them for the utmost security without compromising services. MBSA still works on newer OSes, such as Windows 10 and Windows Server 2012/2016, though it can be used in conjunction with the Windows Server Manager app to check compliance with best practices, troubleshoot configuration errors, and establish operating baselines for detecting variations in performance, which may indicate a compromised system.

4: Data backup management

Let’s face it: a computer is only as reliable as the data it works with. If that data has been compromised, corrupted, or has otherwise lost its integrity (say, through encryption by ransomware), it ceases to be useful or reliable.

One of the best protections against ransomware in general is a good backup system; several backup systems are better still. Because data can be backed up to several different media at once, combining an incremental backup to a local drive you can transport with you, a continuous backup to cloud storage with versioning support, and a third backup to a network server with encryption provides ample redundancy: if your local data becomes compromised, you still have three possible data sets to recover from.

The Backup and Restore utility native to Windows clients and servers provides a lightweight solution for backing up local data across multiple storage types, while OneDrive offers excellent cloud backup capability. Third-party software for centrally managing data backups across an organization, or to and from the cloud, is available from several providers as well.
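To show the snapshot idea behind versioned backups in its simplest form, this Python sketch copies a source folder into a new timestamped directory on every run, leaving earlier snapshots untouched. The paths are hypothetical, and a real regimen would add incremental copying, integrity checks, and the cloud and network targets described above.

```python
import shutil
from datetime import datetime
from pathlib import Path


def versioned_backup(source: Path, destination_root: Path) -> Path:
    """Copy the source tree into a new timestamped folder, keeping older snapshots intact."""
    snapshot = destination_root / datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copytree(source, snapshot)  # fails if the snapshot folder already exists
    return snapshot


if __name__ == "__main__":
    # Hypothetical paths: point these at the data you care about and your backup drive.
    created = versioned_backup(Path(r"C:\Users\me\Documents"), Path(r"E:\Backups\Documents"))
    print(f"Snapshot written to {created}")
```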

5: Encryption for data at rest and in motion

Encrypting data will not, on its own, prevent ransomware infections, nor will it stop a virus from encrypting the already encrypted data should the device become infected. Be that as it may, some apps use a form of containerization to sandbox encrypted data, rendering it completely unreadable by any process outside the container application’s API.

This is extremely useful for data at rest, since it prevents outside access except through the designated application. But it does nothing for data in motion, or data that is being transferred over the network. In cases where transmission is required, the de facto standard is a virtual private network (VPN), which creates an encrypted tunnel through which data is sent and received, ensuring it is protected at all times.
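For a sense of what application-level encryption of data at rest looks like, here is a short Python sketch using the third-party cryptography library’s Fernet recipe (symmetric, authenticated encryption). The file name is a placeholder, and key management, which is the hard part, is deliberately left out.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography


def encrypt_file(path: str, key: bytes) -> str:
    """Write an encrypted copy of the file; only holders of the key can read it back."""
    with open(path, "rb") as source:
        token = Fernet(key).encrypt(source.read())
    encrypted_path = path + ".enc"
    with open(encrypted_path, "wb") as target:
        target.write(token)
    return encrypted_path


def decrypt_file(encrypted_path: str, key: bytes) -> bytes:
    """Return the original plaintext bytes."""
    with open(encrypted_path, "rb") as source:
        return Fernet(key).decrypt(source.read())


if __name__ == "__main__":
    key = Fernet.generate_key()  # store this securely; losing the key means losing the data
    protected = encrypt_file("quarterly-report.xlsx", key)  # hypothetical file name
    print(decrypt_file(protected, key)[:32])
```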

6: Secured network infrastructure configurations

Unfortunately, the network is often set up and configured when new hardware is installed and then left to operate unchecked until something fails. Networking equipment, including routers, switches, and wireless access points, requires updated firmware and proper configuration, along with proactive monitoring to address trouble spots before they become full-blown issues.

As part of the configuration process, an optimized network will use virtual LANs (VLANs) to segment traffic and should be managed to ensure that data gets where it needs to go as efficiently as possible. Another security benefit of VLANs is the ability to logically quarantine malicious traffic or infected hosts so that they can’t spread the infection to other devices or parts of the network. This lets administrators deal with compromised hosts without the risk of spreading the infection, or simply shut down the specific VLAN altogether to effectively cut the affected device(s) off from the internet until remediation has occurred.

SEE: System monitoring policy (Tech Pro Research)

7: Network, security, acceptable use, and data recovery policies

Policies are often used by larger organizations to enforce compliance with rules and regulations by their employees. However, besides being documents that dictate the rules of the workplace, policies can also serve as guidelines for end users to follow before an attack takes place and as a survival guide during and after one occurs.

While policies do not inherently stop malware at a technical level, if written properly they can address known issues or concerns with respect to data security and arm employees with useful information that could prevent an infection from spreading. Policies may also direct them to provide feedback to IT support to remedy a reported issue before it becomes a larger problem.

Policies should always be considered “drafts” in a sense. Technology is dynamic and ever changing, so the policies that are in effect must change too. Also, be mindful of any restrictions or regulations that may apply to your field. Depending on the industry, writing policies can get tricky and should be addressed with management (and perhaps legal) teams for accuracy and compliance.

8: Change management documentation

As with instituting policies, there is no direct correlation between documenting the change management process (that is, recording all changes to clients and servers, including patch deployments, software upgrades, and baseline analyses) and preventing ransomware outright.

However, detailing changes made to system configurations, along with the other measures previously listed, can have a profound effect on IT’s ability to respond to threats proactively or reactively. Furthermore, it allows for adequate testing and measurement of the effects that any changes made to systems have on the services provided and on uptime. Lastly, it offers a record of the changes made (alongside their results), which administrators, contractors, and other support personnel can review to determine the cause of an issue or possibly prevent its recurrence in the future.

For a comprehensive set of documentation to be useful, you need input from various support teams—including systems and network administrators, help desk staff, and management—to create a documentation process that is effective yet simple to follow and easy to manage.

9: End-user training

Never underestimate the value of proper training for all staff, not just IT. Protecting against malware is not solely IT’s job. It’s everyone’s responsibility since it affects everyone and can be essentially brought on by anyone at the organization.

Considered a preventative measure, training that focuses on identifying possible malware attacks, such as phishing, can be an effective tool in preventing malware campaigns against your organization from compromising sensitive data.

End-user training should center not just on identifying malware attack attempts, but should also target mitigation techniques that users can take to prevent or slow down infections should they suspect their computers have been compromised. Finally, no training is complete without informing users about the organization’s expectations with respect to their responsibilities on reporting issues the instant they spot something out of the norm.

SEE: Information security incident reporting policy (Tech Pro Research)

10: Risk management assessments

The aim of a risk assessment (RA) and risk management (RM) process is to identify internal and external threats (also called hazards) and the equipment and services that are affected by them, as well as to analyze their potential impact. The management portion of RA involves evaluating this data to prioritize the list of risks and identify the best plan of action in mitigating them.
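To picture the prioritization step, the sketch below scores each risk with the common likelihood-times-impact convention and sorts the register accordingly. The entries and scales are invented for illustration; a real RA/RM program would use whatever scoring model the organization and its regulators have agreed on.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (critical)

    @property
    def score(self) -> int:
        # A common risk-matrix convention: score = likelihood x impact.
        return self.likelihood * self.impact


# Hypothetical register entries, for illustration only.
register = [
    Risk("Unpatched legacy server", likelihood=4, impact=5),
    Risk("Lost unencrypted laptop", likelihood=3, impact=4),
    Risk("Printer firmware bug", likelihood=2, impact=1),
]

# Highest-scoring risks float to the top of the remediation list.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```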

RA and RM can help you pinpoint the trouble spots and implement an ongoing plan to prevent these issues from negatively affecting your organization. At the very least, RA/RM allows IT to focus its efforts on aligning the company’s resources with the devices that pose the greatest threat if compromised, such as mission-critical systems.

This process enables IT, management, and compliance/regulation entities to best determine the path forward in identifying equipment, mitigating hazards, determining the order in which to resolve threats, and evaluating the assessment itself so that procedures can be updated and corrective actions modified as risks change over time.

Why Windows 10 S won’t run Linux distributions


If you were wondering whether or not you’ll be able to run a Linux distribution on Windows 10 S, the answer is a resounding “No!”

In a Thursday blog post, Microsoft senior program manager Rich Turner explained that Windows 10 S wasn’t created to be used as a primary tool for hackers and IT pros. In fact, he wrote, it’s aimed at non-technical users, and command-line apps, shells, and Consoles aren’t allowed to run on the platform.

SEE: Securing Linux policy template (Tech Pro Research)

In early May 2017, Microsoft announced that certain Linux distros would be coming to the Windows Store and Windows Subsystem for Linux (WSL). Because Windows 10 S only works with apps from the Windows Store, this prompted many users to ask if the distros in question would be made available on Windows 10 S, Turner wrote.

Windows 10 S has been “deliberately constrained” to prevent certain types of tasks from running, Turner wrote. And that limits the abilities of certain developers and admins who may need access to particular local machine features or to run certain scripts.

Still, Turner wrote, “Windows 10 S can be used for building code that runs elsewhere – on the web, on IoT devices, on a remote VM via ssh, etc. Such scenarios don’t require the user access/modify a local machine’s system, settings/registry, filesystem, etc.”

Regarding the ability of Windows 10 S to run only apps from the Windows Store, Microsoft makes a distinction among the types of apps that are available. Modern Universal Windows Platform (UWP) apps, which run in a secure sandbox, and Desktop Bridge apps, which have broader OS access but are vetted by the publisher, will both work fine. However, Linux distro store packages are a different type of app package altogether, Turner’s post noted.

Once these distros are installed, he wrote, they are treated as command-line tools and cannot run in the same type of secure environment. As such, they cannot run on Windows 10 S.

So, if you want to run a Linux distro, the only option is to upgrade to the full version of Windows 10.

The 3 big takeaways for TechRepublic readers

  1. Windows 10 S users will not be able to run Linux distros, even though some are available through the Windows Store.
  2. Windows 10 S is created for non-technical users, and command-line apps, shells, and Consoles aren’t allowed to run on the system.
  3. Users who want to run Linux distros will have to upgrade to the full version of Windows 10.

6 technology trends that will change the way business leaders think about risk


As cloud computing, Internet of Things (IoT), and artificial intelligence (AI) gain prominence in the enterprise, tech and business leaders alike must reconsider risk management plans and how they impact business objectives, according to French Caldwell, a former White House cybersecurity advisor, former Gartner fellow, and current vice president and chief evangelist for MetricStream.

In the past, risk management meetings occurred quarterly or even annually, and ongoing monitoring was rare. That approach is unwise given the emerging technologies and digital transformation efforts underway at many companies, Caldwell said. “The risk to your current business initiatives changes over time,” Caldwell said. “If you start a new business initiative, there are going to be new risks. You need to identify risks up front, but monitor those risks to your business objective on an ongoing basis.”

The top drivers for governance, risk management, and compliance (GRC) investment are improving overall risk oversight and managing the new risks that new business initiatives introduce, according to recent MetricStream surveys. “Back five years ago, neither of those would have been near the top of the list—it would have been regulatory compliance,” Caldwell said. “But today, people want to make sure they have the right risk intelligence, and that they understand the impact of risk and regulations on investments in new business initiatives.”

Accordingly, CEOs are increasingly involved with enterprise risk management today, Caldwell said. However, “there is often a disconnect between how tech leaders think and how business leaders think around things related to risk management and compliance,” he added.

SEE: Risk Management: Enabling the Business (Tech Pro Research)

For example, consider compliance from the CISO’s and the CEO’s points of view. For the CISO, compliance is about making sure IT security controls are effective, and that testing is happening properly and is being documented in the event of an audit. But business leaders often think of compliance as regulatory risk: the impact that new rules, or non-adherence to existing rules, could have on their ability to achieve business objectives.

“A lot of organizations are starting to get more mature around that—we see CISOs and CIOs looking at linking IT risk and controls to business objectives and processes, to demonstrate those links and eventual impact of those IT risks on those business objectives,” Caldwell said.

Here are the top six technology trends, identified in a recent MetricStream report, that are confronting GRC professionals with new challenges:

1. The transition to the modern cloud and hyperconvergence

As cloud computing grows in popularity, the landscape is moving toward XaaS—everything as a service. This will transform business value chains, as data will be able to flow seamlessly and securely across different platforms and infrastructures. The transition will usher in a new era of risks, regulations, and governance requirements. “Companies will need to not only strengthen their focus on data privacy, security, and vendor management, but also improve the transparency of audits, legal, and regulatory compliance, while refining business continuity planning,” the report stated.

2. Pervasiveness of artificial intelligence (AI)

The risk intelligence gathered from AI and machine learning platforms will lead to gains in performance management at many levels, according to the report. “GRC technology will need to evolve to keep pace with these expanding data sets and varied risks,” the report stated. “Solutions will need to transform to help businesses manage risk and compliance effectively and pervasively across the organization.”

3. Evolution of the Internet of Things (IoT)

With a predicted 20.8 billion connected devices in use by 2020, new GRC challenges abound. IoT developers often overlook security, with the Mirai botnet demonstrating how dangerous this can be. “If we are to truly benefit from IoT in the future, we need to think of new ways of securing these devices,” the report stated.

4. Blockchain layering in GRC

The use of blockchain technology is growing across many industries. Future tech tools will be able to provide a way to connect to blockchain exchanges, providing governance over and visibility into data, according to the report. “Companies will be able to leverage blockchains to streamline the exchange of risk and compliance related information in real time, while also flagging discrepancies,” the report stated.
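The tamper-evidence idea behind that claim can be illustrated in a few lines of Python: each compliance record carries a hash of the previous one, so a later edit breaks the chain and is flagged. This is a toy sketch of the underlying concept, not a description of any vendor’s blockchain integration.

```python
import hashlib
import json
from datetime import datetime, timezone


def record_hash(body: dict) -> str:
    """Deterministic SHA-256 over the record's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


def append_record(chain: list, payload: dict) -> None:
    """Add an entry whose hash covers the previous entry, making edits detectable."""
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "previous_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    body["hash"] = record_hash(body)
    chain.append(body)


def verify(chain: list) -> bool:
    """Flag discrepancies: a tampered entry no longer matches its stored hashes."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        recomputed = record_hash({k: v for k, v in entry.items() if k != "hash"})
        if entry["previous_hash"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True


ledger: list = []
append_record(ledger, {"control": "access review", "status": "tested"})        # illustrative entries
append_record(ledger, {"control": "vendor assessment", "status": "exception"})
print(verify(ledger))                      # True
ledger[0]["payload"]["status"] = "passed"  # tamper with history...
print(verify(ledger))                      # ...and the chain no longer verifies: False
```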

5. The new economy

Businesses will drive the formation of new industries, as we’ve seen with the creation of Uber and autonomous vehicles. These new industries will require new regulations and governance requirements, the report noted, and GRC technologies will need to adapt to the changing landscape.

6. The new workforce

As workforces become more mobile, businesses will require new frameworks to deal with the risks and requirements in terms of security, authentication measures, infrastructure security, data encryption, and country-specific regulations, the report said.

Why patching Windows XP forever won’t stop the next WannaCrypt


The effects of the WannaCrypt ransomware attack were far-reaching. Europol dubbed it “the largest ransomware attack observed in history,” with more than 200,000 victims in 150 countries. Computer systems were knocked offline in hospitals across England, in European car plants, in Russian banks, and in Chinese schools and colleges.

But does Microsoft have the power to mitigate the effects of a similarly devastating attack by changing how it patches old systems? On the face of it, it appears so.

In the aftermath of the WannaCrypt attack, Microsoft took the extraordinary step of patching Windows XP, Windows Server 2003 and other unsupported OSes, to fix the flaw that WannaCrypt exploited to infect systems.

However, supported versions of Windows received this same patch from Microsoft back in March. Had the patch also been available for unsupported versions of Windows at that time, it’s possible the scale of the WannaCrypt infection could have been significantly reduced, particularly as a single machine infected with WannaCrypt attempts to spread the ransomware to every machine on its network.

Obviously Microsoft hasn’t got the resources to patch every flaw in every operating system it’s ever released. The company told TechRepublic that, in this instance, it had taken the extraordinary step of patching unsupported operating systems ‘given the potential impact to customers and their businesses’.

SEE: Ransomware: The smart person’s guide

But because of the huge consequences of outbreaks on the scale of WannaCrypt, shouldn’t Microsoft consider patching the most severe flaws, as defined by the Common Vulnerability Scoring System, in all operating systems, even those that have fallen out of support?

If it could curtail another major outbreak on the scale of WannaCrypt, isn’t it worth trying? After all, Microsoft has compared the vulnerability that WannaCrypt exploited to a Tomahawk missile. Such a move would also help shield those who were unable to upgrade from older versions of Windows because the specialized equipment their organization relies upon doesn’t support newer versions.

Writing in the New York Times, Zeynep Tufekci said this is precisely the sort of approach that Microsoft should take.

However, security experts point out that such a move could inadvertently actually worsen global IT security.

“The question whether Microsoft should proactively patch its unsupported operating systems against the most severe vulnerabilities is a very good one and not as simple as it may seem,” said Ziv Mador, VP of security research for SpiderLabs Trustwave.

“Clearly, once an attack of the magnitude we’re currently experiencing with WannaCry starts, it makes perfect sense for Microsoft to release patches also for the vulnerable end-of-life versions. It would be unwise to let the worm spread without releasing a patch because it clearly can help organizations and consumers protect themselves quickly and effectively.”

Unforeseen repercussions

But the unintended consequence of Microsoft proactively patching the worst bugs in old operating systems could be a greater number of individuals and businesses feeling it was safe to carry on using what would still be a fundamentally insecure operating system, he said.

Firstly, these systems would remain unprotected against the multitude of malware that exploited less severe, unpatched flaws in the OS, according to Mador. On top of this, he said, Microsoft keeps improving security technologies in Windows, adding new defense layers, such as the forthcoming Windows Defender Application Guard.

“That means that computers running later versions of Windows are significantly at lower risk of being successfully exploited and infected,” he said, citing Microsoft research that found newer versions of Windows have lower malware infection rates.

“If Microsoft constantly and proactively releases security updates also for the older unsupported versions of Windows, that can end up with more organizations and users not upgrading to supported ones.”

“Providing security updates to EOL [end of life] versions of Windows is therefore a double-edged sword. From the security perspective, it has a positive impact in the short term but may have a negative effect overall.”

He added that malware that replicates itself to other computers, dubbed “worms,” rarely hits the scale of WannaCrypt.

“The last significant worm that propagated through a Windows vulnerability was Conficker, back in 2008.”

Patching these older systems could also be undesirable for the organizations involved, according to Javvad Malik, security advocate for AlienVault.

“Microsoft has done the right thing by making the patch available even for older, unsupported systems. But it shouldn’t proactively push out the patches, as there are usually some business reasons why companies are still running old and unpatched systems,” he said.

“By forcefully pushing a patch, it could do just as much harm, causing systems and applications to become unreliable.”

David Chismon, senior security consultant at MWR InfoSecurity, felt that it would be unfair to place the burden of patching old systems, even only for the most severe flaws, on Microsoft.

“Continuing to support outdated operating systems costs Microsoft significantly as each patch has to be tested rigorously to reduce the risk of the patch stopping something working. It is not reasonable to expect a company to support a product forever, particularly when not paying them to do so.”

A better solution would be for companies that cannot upgrade for financial or software compatibility reasons to keep these unsupported machines offline and on a separate network from the rest of the organization, he said.

According to a survey by Spiceworks, some 52% of businesses were still running Windows XP in 2017, despite support ending three years previously.

Biometric mobile payments will hit $2B this year


Fingerprint and selfie pay are on the rise: Mobile payments authenticated by biometrics will reach nearly $2 billion this year, up from $600 million in 2016, according to a report from Juniper Research released Monday.

Apple Pay kicked off the initial growth of biometric mobile payments, allowing customers to make payments in stores and on apps using their fingerprint to access banking information. Android Pay and Samsung Pay furthered the movement toward these payments, the report noted.

Opportunities for biometric pay have been boosted by the growing availability of fingerprint sensors on phones and tablets, the report found. About 60% of smartphone models are expected to ship with fingerprint sensors this year, with many Chinese vendors incorporating them into mid-range models as well, Juniper Research noted.

But fingerprints aren’t the only biometric payment solution expected to rise: Mastercard’s Identity Check Mobile service, set to go live later this year, allows users to scan their fingerprints or take a selfie to validate their identity and make a payment. After a soft launch in 2016, Mastercard surveyed users, and found that 74% of respondents said biometrics like fingerprints or selfies were easier to use than traditional passwords. And 90% said they believed they would use biometrics for online payment security in the future.

SEE: Mobile app development policy (Tech Pro Research)

Further, India’s identification authority recently released an app through which merchants can verify a customer’s ID via either fingerprint or iris scan, which also links back to the customer’s bank account for payments.

Mobile payments are booming: A recent report from IEEE stated that this payment method could officially kill cash by as early as 2030, as TechRepublic’s Conner Forrest reported. Increasing options for payment authentication could help speed up mobile payment adoption rates, Forrest noted.

The key challenge for biometric mobile payment service providers will be striking the right balance between convenience and security, said Windsor Holden, head of forecasting and consultancy, in a press release. “Typically, the more secure the solution, the more time-consuming the authentication process,” Holden said in the release. “It is essential to offer a range of verification options allowing clients to determine what level of security is required for a given authentication.”

The 3 big takeaways for TechRepublic readers

1. Mobile payments authenticated by biometrics will reach nearly $2 billion this year, up from $600 million in 2016, according to a report from Juniper Research.

2. The growth of fingerprint sensors and other biometric technologies in smartphones and tablets has enabled the biometric payment market.

3. Security is a key challenge for biometric mobile payment service providers, the report noted.

STEM is great, but here’s why an English degree might be a smarter bet


Three years ago, I wrote that every tech company needs an English major, someone to express machine-readable tech in human-understandable language. This, however, isn’t the only reason to look to the Humanities for employees. Or, for that matter, for students to follow their inner poet, even as software keeps eating the world.

As venture capitalist Scott Hartley has posited, “we need to double down on the liberal arts” because “they are what give us the context with which we apply the new tools and our very human comparative advantage, even in a world in which machines continue to get smarter and smarter.” In other words, STEM jobs are nice but somewhat subject to robotic replacement, while liberal arts majors will always be needed to apply human reasoning to the machines.

The robots are coming

It’s become a truism that machines will threaten jobs because, well, it’s true. Forrester tallied up 24.7 million jobs getting the axe due to machines by 2027. The silver lining is that those same machines will yield 14.9 million jobs, leaving a “mere” 9.8 million Americans without jobs. Which jobs? As Conner Forrest wrote, “manual labor [and] repetitive menial tasks will be the most impacted.”

SEE: Research: Automation and the future of IT jobs (Tech Pro Research)

Lest you think this is something for low-wage workers to exclusively fret about, high-end jobs in the medical field and elsewhere are also on the chopping block, as I’ve written. A radiologist, for example, spends much of her time pattern-matching x-ray images, something that a machine can do more efficiently (and accurately).

Even jobs that aren’t displaced by machines will be impacted by them. According to a 2016 McKinsey Global Institute report, “while automation will eliminate very few occupations entirely [5% of jobs] in the next decade, it will affect portions of almost all jobs to a greater or lesser degree.” How much? By McKinsey’s reckoning, approximately 30% of tasks within 60% of jobs would change.

I’d rather just…sing!

Which brings us to English majors. And Humanities undergrads. And others who are more comfortable with Virginia Woolf than JavaScript. (If you haven’t read To the Lighthouse, put away that C++ code and read it. Lovely.) Because, as Hartley said, “Those many tasks within a solid majority of jobs that will be immune to machine automation are those that cannot be sufficiently defined and programmed.”

SEE: Video: When it comes to automation filling jobs, is the age of AI different? (TechRepublic)

Further explaining, he wrote:

Such tasks require creativity and original thought, intuition, coordination, communication, empathy and persuasion. In other words, humans might not perform rote tasks like guiding giant trucks to pick up piles of ore, or even elementary data collection. But they will ask questions of the data, help frame parameters, test hypotheses, collaborate with teammates across departments, and communicate results with compassion to clients.

This isn’t to suggest that a liberal arts degree absolutely trumps a STEM degree; technical literacy is critical to effectively programming computers to give us the data we need. But simply coming out of school with a STEM-oriented degree isn’t enough. Indeed, as Hartley noted, “Rote computer programming has already become a cheap commodity, purchased quickly and easily on the global market. And it is itself increasingly becoming automated.”

No, what is needed, regardless of one’s degree, is humanity. It’s the thinking and feeling that makes us human, and able to interact with other humans, that cannot be automated away by machines.

So dig into your React Native. Embrace MongoDB. But also spend time with John Steinbeck, Flannery O’Connor, and Sylvia Plath. In this way, you’ll be better able to put the machines to work for you, rather than instead of you.

Industrial IoT’s global market to reach $934B by 2025


The global industrial Internet of Things (IoT) market is predicted to reach $933.62 billion by 2025—up from $109 billion in 2016, according to a report from Grand View Research, released Monday.

Adoption of industrial IoT models has grown worldwide, stemming from the technology’s ability to reduce costs and increase productivity, process automation, and time-to-market, the report noted. The affordability and availability of processors, sensors, and other technologies that can facilitate access to real-time information is also key for IoT adoption. Manufacturers are now leveraging the benefits of industrial IoT solutions to consolidate their control rooms, to track assets, and to improve their analytics functionalities through predictive maintenance, according to the report.

General Electric, IBM, Cisco, Siemens, and Intel dominated the global market share in 2016, the report found.

“The ever growing need to enhance operational efficiency coupled with a strong alliance between the key industry players is expected to drive the market,” the report stated. “With the evolution of the society toward an integrated digital-human workforce, the industrial internet is presumed to incorporate significant opportunities for growth over the next eight years.”

SEE: Research: Big data and IOT – Benefits, drawbacks, usage trends (Tech Pro Research)

The managed industrial IoT services segment is also predicted to grow over the next eight years, as implementation of IoT technology requires integration of these managed industrial IoT services throughout the ecosystem, the report noted.

“IIoT is helping businesses across the world to improve worker safety, reduce operating costs, and enhance productivity,” the report stated. “Companies are increasingly establishing new product and service hybrids globally in order to disrupt their own markets and generate fresh revenue streams by shifting from selling products to delivering measurable outcomes.”

While North America accounted for the largest industrial IoT market share in 2016, the Asia Pacific region, led by China, is expected to surpass it by the end of 2025.

However, several barriers may limit the growth of industrial IoT, the report stated, including a lack of a defined protocol or standardization, and the use of legacy equipment. Security concerns, especially those associated with big data, are also expected to limit market growth, according to the report.

The 3 big takeaways for TechRepublic readers

1. The global industrial Internet of Things (IoT) market is predicted to rise from $109 billion in 2016 to $934 billion by 2025, according to a new report from Grand View Research.

2. Industrial IoT can reduce costs and increase productivity, process automation, and time-to-market for manufacturers, the report noted.

3. Barriers to industrial IoT adoption include cybersecurity issues, lack of standardization, and legacy equipment, the report found.