How to create a bootable VMware ESXi USB drive in Windows


Virtualization is a big player in IT these days, regardless of the sector you’re in. Most businesses can benefit from consolidating servers—and to a greater degree, converging server, storage, and network infrastructures for centralized management and scalability.

With regard to consolidating servers by virtualizing them, the industry standard is VMware, with its extensive software and support offerings for businesses of all sizes. It even has a free offering—ESXi—its base hypervisor, which can be run on any supported bare-metal server hardware to get IT pros familiar with the product and help organizations on their way to migrating their servers to virtual machines.

While many newer servers have added modern touches to facilitate VMware deployments, such as internal SD card readers for loading the hypervisor onto an SD card to maximize available resources, these servers have also done away with legacy items, like optical drives—and that makes loading VMware onto the servers a bit difficult initially.

But fret not, as USB flash drives (UFDs) have proven more than capable of replacing optical media for booting operating systems. And given their flexible read/write nature, even updating installers is a breeze using the very same UFD.

Read on and we’ll cover the steps necessary to create a bootable UFD, with VMware ESXi on it, from your Windows workstation. However, before jumping into this, there are a few requirements:

  • Windows workstation (running XP or later)
  • Rufus
  • VMware ESXi ISO
  • USB flash drive (4GB minimum)
  • Internet access (optional, but recommended)

SEE: VMware vSphere: The smart person’s guide (TechRepublic)

Creating the USB installer

    Start by inserting your UFD into the Windows computer and launching Rufus. Verify that under Device, the UFD is listed (Figure A).

    Figure A

      In the next section, Partition Scheme, select MBR Partition Scheme For BIOS Or UEFI from the dropdown menu (Figure B).

      Figure B

        Skip down to the CD icon and click on it to select the previously downloaded VMware ESXi ISO image (Figure C).

        Figure C

      Finally, click on the Start button to begin the process of formatting and partitioning the UFD and extracting the contents of the ISO to your USB drive. Please note that any data on the drive will be erased (Figure D).

      Figure D

      The transfer process will vary depending on the specifications of your workstation, but typically it should be completed within several minutes. During this process, you may be prompted to update the menu.c32 file, as the one used by the ISO image may be older than the one used by Rufus on the flash drive. If this occurs, click Yes to automatically download the newest compatible version from the internet. Once the process is complete, your USB-based VMware ESXi installation media will be created and ready to boot the hypervisor setup on your server.

Note: If the USB drive will not boot on your server, ensure that USB boot functionality is enabled in the BIOS or UEFI settings. In addition, before downloading the ISO from VMware, it is highly recommended that you check VMware’s Compatibility Guide on its website, which lets you verify that your hardware is supported for use with its products. If it isn’t, a previous version of ESXi may be a better fit, or supplemental drivers may need to be included before your specific server will boot properly.
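One optional sanity check, separate from the Rufus steps above, is to verify the integrity of the downloaded ISO before you flash it. The short Python sketch below is one way to do that; the file path and expected hash are placeholders, and the value to compare against is the checksum VMware publishes alongside the ISO on its download page (SHA-256 on current pages).

```python
import hashlib
from pathlib import Path

# Placeholders: point ISO_PATH at your downloaded installer and paste the
# SHA-256 value shown on VMware's download page into EXPECTED_SHA256.
ISO_PATH = Path(r"C:\ISOs\VMware-ESXi-installer.iso")
EXPECTED_SHA256 = "paste-the-published-checksum-here"

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in 1 MB chunks so large ISOs never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(ISO_PATH)
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Checksum matches; hand the ISO to Rufus.")
    else:
        print("Checksum mismatch; re-download the ISO before flashing.")
        print(f"expected: {EXPECTED_SHA256}\ngot:      {actual}")
```

If the hashes differ, re-downloading the ISO is cheaper than troubleshooting a flaky installer later.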

      Edge computing: The smart person’s guide

Many companies want Internet of Things (IoT) devices to monitor and report on events at remote sites, and much of that data processing must be done at those sites rather than in a central data center. The term for this remote data collection and analysis is edge computing.

      Edge computing technology is applied to smartphones, tablets, sensor-generated input, robotics, automated machines on manufacturing floors, and distributed analytics servers that are used for “on the spot” computing and analytics.

      Read this smart person’s guide to learn more about edge computing. We’ll update this resource periodically with the latest information about edge computing.

      SEE: All of TechRepublic’s smart person’s guides

      Executive summary

      • What is edge computing? Edge computing refers to generating, collecting, and analyzing data at the site where data generation occurs, and not necessarily at a centralized computing environment such as a data center. It uses digital devices—often placed at different locations—to transmit the data in real time or later to a central data repository.
• Why does edge computing matter? It is predicted that by 2020 more than 5.6 billion smart sensors and other IoT devices will be in use around the world, and these devices will generate at least 507.5 zettabytes of data. Edge computing will help organizations handle this volume of data.
      • Who does edge computing affect? IoT and edge computing are used in a broad cross-section of industries, which include hospitals, retailers, and logistics providers. Within these organizations, executives, business leaders, and production managers are some of the people who will rely on and benefit from edge computing.
      • When is edge computing happening? Many companies have already deployed edge computing as part of their IoT strategy. As the numbers of IoT implementations increase, edge computing will likely become more prevalent.
      • How can our company start using edge computing? Companies can install edge computing solutions in-house or subscribe to a cloud provider’s edge computing service.

      SEE: Ultimate Data & Analytics Bundle of Online Courses (TechRepublic Academy)

      What is edge computing?

      Edge computing is computing resources (e.g., servers, storage, software, and network connections) that are deployed at the edges of the enterprise. For most organizations, this requires a decentralization of computing resources so some of these resources are moved away from central data centers and directly into remote facilities such as offices, retail outlets, clinics, and factories.

Some IT professionals might argue that edge computing is not that different from traditional distributed computing, which saw computing power move out of the data center and into business departments and offices several decades ago. Edge computing is different because it is tethered to IoT data collected from remote sensors, smartphones, tablets, and machines. This data must be analyzed and reported on in real time so its outcomes are immediately actionable for personnel at the site.

      IT teams in every industry use edge computing to monitor network security and to report on malware and/or viruses. When a breach is detected at the edge, the menaces can be quarantined, thereby preventing a compromise of the entire enterprise network.

      From a business standpoint, here is how various industries use edge computing.

      • Corporate facilities managers use IoT and edge computing to monitor the environmental settings and the security of their buildings.
      • Semiconductor and electronics manufacturers use IoT and edge computing to monitor chip quality throughout the manufacturing process.
• Grocery chains monitor their cold chains to ensure that perishable food requiring specific humidity and temperature levels during storage and transport is kept within those levels.
• Mining companies deploy edge computing with IoT sensors on trucks to track the vehicles as they enter remote areas. These companies also use edge computing to monitor equipment on the trucks in an attempt to prevent goods in transit from being stolen for resale on the black market.

      IoT and edge computing are used in a broad cross-section of industries, which include the following.

      • Logistics providers use a combination of IoT and edge computing in their warehouses and distribution centers to track the movement of goods through the warehouses and in the warehouse yards.
      • Hospitals use edge computing as a localized information collection and reporting platform in their operating rooms.
      • Retailers use edge computing to collect point of sales data at each of their stores, and then they transmit this data later to their central sales and accounting systems.
      • Edge computing that collects data generated at a manufacturing facility could monitor the functioning of equipment on the floor and issue alerts to personnel if a particular piece of equipment shows signs that it is failing.
      • Edge computing, combined with IoT and standard information systems, can inform production supervisors whether all operations are on schedule for the day. Later, all of this data that is being processed and used at the edge can be batched and sent into a central data repository at the corporate data center where it can be used for trend and performance analysis by other business managers and key executives.
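To make the pattern in the examples above a bit more concrete, here is a deliberately simplified Python sketch of "act locally, batch to the data center later." The sensor names and the temperature threshold are invented for illustration; a real deployment would sit behind whatever IoT platform the organization runs at the edge.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

@dataclass
class EdgeNode:
    alert_threshold_c: float = 8.0          # e.g., a cold-chain storage limit
    buffer: List[Reading] = field(default_factory=list)

    def ingest(self, reading: Reading) -> None:
        # Real-time decision made at the edge: no round trip to headquarters.
        if reading.temperature_c > self.alert_threshold_c:
            print(f"ALERT: {reading.sensor_id} reads {reading.temperature_c} C")
        self.buffer.append(reading)

    def flush_to_datacenter(self) -> List[Reading]:
        # Later, the accumulated readings are batched off to the central
        # repository for trend and performance analysis.
        batch, self.buffer = self.buffer, []
        return batch

node = EdgeNode()
node.ingest(Reading("trailer-42", 6.5))
node.ingest(Reading("trailer-42", 9.1))   # triggers an immediate local alert
print(f"Batching {len(node.flush_to_datacenter())} readings to the data center")
```

The point of the sketch is the split in responsibilities: the alert happens on the spot, while the bulk data moves upstream only when it is convenient to send it.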

      For IT, edge computing is not a slam-dunk proposition—it presents significant challenges, which include:

      • The sensors and other mobile devices deployed at remote sites for edge computing must be properly operated and maintained;
      • Security must be in place to ensure that these remote devices are not compromised or tampered with;
      • Training is often required for IT and for company operators in the business so they know how to work with edge computing and IoT devices;
      • The business processes using IoT and edge computing must be revised frequently; and
• Since the devices on the edge of the enterprise will be emitting data that is important for decision makers throughout the company, IT must devise a way to find sufficient bandwidth to send all of this data (usually over the internet) to the necessary points in the organization.


      Why does edge computing matter?

It is projected that by 2020 there will be 5,635 million (roughly 5.6 billion) smart sensors and other IoT devices deployed around the world. These smart IoT devices will generate over 507.5 zettabytes (1 zettabyte = 1 trillion gigabytes) of data.

      By 2023, the global IoT market is expected to top $724.2 billion. The accumulation of IoT data, and the need to process it at local collection points, is what’s driving edge computing.

      Businesses will want to use this data—the catch is the data that IoT generates will come from sensors, smartphones, machines, and other smart devices that are located at enterprise edge points that are far from corporate headquarters. This IoT data can’t just be sent into a central processor in the corporate data center as it is generated, because the volume of data that would have to move from all of these edge locations into HQs would overwhelm the bandwidth and service levels that are likely to be available over public internet or even private networks.
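A rough back-of-envelope calculation shows why. Treating that 507.5-zettabyte forecast as a single year’s worth of data (an assumption; the timescale isn’t spelled out here), moving all of it to central data centers would demand a sustained aggregate transfer rate on the order of:

```python
# Back-of-envelope only: assumes the 507.5 ZB forecast is one year of IoT data.
ZETTABYTE_BYTES = 10**21
total_bits = 507.5 * ZETTABYTE_BYTES * 8
seconds_per_year = 365 * 24 * 60 * 60

aggregate_tbps = total_bits / seconds_per_year / 1e12
print(f"~{aggregate_tbps:,.0f} Tbps sustained, worldwide")   # roughly 129,000 Tbps
```

Even split across millions of sites, that kind of raw volume is exactly what edge processing is meant to keep off the wide-area network.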

      SEE: Internet of Things Policy (Tech Pro Research)

As organizations move their IT to the “edges” of the organization where the IoT devices are collecting data, they are also implementing local edge computing that can process this data on the spot without having to transport it to the corporate data center. This IoT data is used for operational analytics at remote facilities; the data enables local line managers and technicians to immediately act on the information they are getting.

      Companies need to find ways to utilize IoT that pay off strategically and operationally. The greatest promise that IoT brings is in the operational area, where machine automation and auto alerts can foretell issues with networks, equipment, and infrastructure before they develop into full-blown disasters.

      For instance, a tram operator in a large urban area could ascertain when a section of track will begin to fail and dispatch a maintenance crew to replace that section before it becomes problematic. Then the tram operator could notify customers via their mobile devices about the situation and suggest alternate routes. Great customer service helps boost revenues.


      Who does edge computing affect?

      Edge computing as a way of managing incoming data from IoT will affect companies of all sizes in virtually every public and private industry sector.

Projects can be as modest as placing automated security monitoring on your entryways, or as ambitious as monitoring vehicle fleets in motion, controlling robotics during telesurgery procedures, or automating factories and collecting data on the quality of goods as they pass through various manufacturing operations half a globe away.

      One driving factor is a focus on IoT by commercial software vendors, which are increasingly providing modules and capabilities in their software that exploit IoT data. Subscribing to these new capabilities doesn’t necessarily mean that a company has to invest in major hardware, software, and networks, since so many of these resources are now available in the cloud and can be scalable from a price point perspective.

      Companies that do not take advantage of the insights and actionability that IoT and edge computing can offer will likely be at a competitive disadvantage in the not so distant future. The tram use case cited earlier in this article is an excellent example: What if you operated a tram system, and you did not have IoT insights into the condition of your tracks or the ability to send messages to customers that advise them of alternate routes? What if your competitor has these capabilities? You would be at a competitive disadvantage.


      When is edge computing happening?

Companies in virtually every public and private industry sector are already using IoT technology with an edge computing approach. A 2016 Tech Pro Research survey revealed that over half of all companies surveyed (ranging from large enterprises to very small businesses) had either implemented or were planning to implement IoT in the next 12 months; many of these organizations will use edge computing with their IoT strategies. (Tech Pro Research is a sister site of TechRepublic.) Regardless of where a company is with its IoT implementation, edge computing should be on every enterprise IT strategic roadmap.

      Major IT vendors are the primary enablers of edge computing because they will be pushing their corporate customers to adopt it. These vendors are purveying edge solutions that encompass servers, storage, networking, bandwidth, and IoT devices.

      Affordable cloud-based solutions for edge computing will enable companies of all sizes to move computers and storage to the edges of the enterprise. Cloud-based edge computing vendors also know the best ways to deploy edge computing with IoT to optimize results for the business.


      How can our company start using edge computing?

      Businesses can implement edge computing either on premise as a physical distribution of servers and data collection devices, or through cloud-based solutions. Intel, IBM, Nokia, Motorola, General Electric, Cisco, Microsoft, and many other tech vendors offer solutions that can fit on-premise and cloud-based scenarios. There are also vendors that specialize in the edge computing needs of particular industry verticals and IT applications, such as edge network security, logistics tracking and monitoring, and manufacturing automation. These vendors offer hardware, software, and networks, as well as consulting advice on how to manage and execute an edge computing strategy.

      SEE: Free ebook—Digital transformation: A CXO’s guide (TechRepublic)

      To enable a smooth flow of IoT generated information throughout the enterprise, IT needs to devise a communications architecture that can facilitate the real-time capture and actionability of IoT information at the edges of the enterprise, as well as figure out how to transfer this information from enterprise edges to central computing banks in the corporate data center. Companies want as many people as possible throughout the organization to get the information so they can act on it in strategically and operationally meaningful ways.


      Worried about ransomware? Here are 3 things IT leaders need to know before the next big outbreak

      Ransomware: It’s a fast-growing form of malware that has the potential to disrupt business in a huge way. Some, like the recent WannaCry outbreak, even have the potential to spread from computer to computer.

      Ransomware’s continued spread may be due to how simple it is to use. After all, why do all the work of harvesting information from a victim when you can just wait for them to send you money?

In short, it’s dangerous, it’s spreading, and it could disrupt your entire network. If you’re a tech decision maker, you need to educate yourself on this growing threat. Eric Ogren, cybersecurity analyst at 451 Research, spoke with TechRepublic about some important things you may not know about ransomware.

      1. Ransomware operations are sophisticated, but not cutting edge

      It’s not just a couple of people sitting in a cramped room trying to steal your money—ransomware operations can get pretty sophisticated. “Some ransomware organizations even have help lines if you aren’t sure how to use Bitcoin,” Ogren said. “They’re surprisingly sophisticated.”

      SEE: Despite hype, ransomware accounted for only 1% of malicious programs in 2016, according to report (TechRepublic)

Sophistication doesn’t mean ransomware companies are on the cutting edge of advanced technology, though. When it comes down to it, ransomware campaigns, however advanced their encryption methods and worm capabilities, rely on old, well-known tricks to proliferate.

      Phishing attacks are the most common methods of spreading ransomware, Ogren said, “because it’s so much easier. Why waste time trying to write scripts to break through security when you can just rely on a person to make a mistake?”

Once a piece of ransomware is on a system, it isn’t doing anything unknown either: Most exploit well-known, and likely already patched, flaws. Recent ransomware outbreaks have been perfect examples of this: those who fell prey to WannaCry and Petya were all lacking an essential security patch that Microsoft had released in March 2017.

      2. Most people don’t pay

      Of those hit by ransomware, according to 451 Research (note: study is behind paywall), 81% don’t pay. Instead, they simply reimage the affected machines, and the majority of those restore from backups that minimize data loss.

      Excepting ransomware that worms its way into the BIOS, wiping and reimaging is certainly the best solution for getting rid of ransomware. That’s cold comfort for those who don’t back up their data regularly, though, so make sure you have a solid backup plan in place.

      “It’s not a question of will you get hit by ransomware or other malware,” Ogren said. “It’s a question of when.” He added that businesses should never pay ransoms—all that does is encourage ransomers to keep trying.

      If you’ve heard that companies are stockpiling Bitcoins in anticipation of paying ransoms, you may have considered such a drastic move, but don’t—funnel those funds toward establishing a good backup protocol instead.

      SEE: Gallery: 10 free backup applications to help you prevent disaster (TechRepublic)

      3. There is no silver bullet against ransomware

      “You can do everything right,” Ogren said, “and still end up getting an infection.” Ransomware exploits people in order to spread, and therein lies the problem: Computers can be patched, but a person just needs to see a phishing message that seems like it’s real.

      Minimizing your chances of getting ransomware is the most you can do. That includes:

      • Keeping systems up to date
      • Training users to spot suspicious emails
      • Making sure your IT department or MSP is ready for an attack
      • Backing up computers and essential data
      • Disabling hyperlinks in email so users can’t open phishing messages
      • Blocking attachments from unverified sources

      Search for every possible malware ingress point and shut them all down. It won’t guarantee your safety, but nothing really will. All you can do is minimize your risks.

      Top three takeaways for TechRepublic readers:

      1. Some 81% of ransomware victims just wipe and reimage their machines. Make this your go-to plan when prevention fails, and make it practical by ensuring everything is backed up regularly.
2. Ransomware operations may be sophisticated, but the software isn’t. Most (including WannaCry and Petya) exploit security holes that are known, and in many cases ones that have already been patched. Those who keep up with updates are far less likely to fall victim.
      3. Ransomware prevention is never a guarantee—there’s always the potential for an infection. All you can do is stay on top of good security practices.

      Travel Experience Incubator Challenge seeking startups with innovative ideas


      Startups with innovative ideas for the travel industry are invited to apply for a three-month incubator program being offered by Marriott, Accenture and 1776.

      The Travel Experience Incubator Challenge was announced earlier this week and applications are being accepted through August 7. As of July 20, there were already 32 applicants, and there will be a total of 5-7 startups chosen for the program. Anyone interested can apply online through the Travel Experience Incubator. Winning startups will participate in the program beginning in September this year.

One of the reasons for the challenge is that a new Accenture study shows millennials are twice as likely to switch hotel providers. The travel industry needs to transform in order to provide more personalized experiences that appeal to this market. The incubator is intended to develop solutions that appeal to a broad audience, including leisure and business travelers.

      “If we can bring the power of all three of these organizations together and leverage the networks and startups to get innovation and new ideas to solve real business problems for Marriott and travel in general, it’s going to be wonderful,” said Paul Loftus, senior managing director at Accenture Travel.

      SEE: The most useful iOS travel apps for business professionals (TechRepublic)

      There are four areas that the challenge will focus on:

      • Dream and Discovery: Solutions that create or enable immersive experiences, provide new or improved opportunities for travel discovery, research and shopping through online/offline travel agencies, compare corporate managed travel, and improve the reservation, confirmation, and cancellation experiences.
      • Plan and Anticipate: Solutions that provide new opportunities in travel social media/management platforms, planning and logistics, flexible pricing and upselling, group travel, payment and financial services, and transit and luggage services.
• The Hotel and Local Experiences: Solutions that improve the in-person stay experience, where digital and physical converge, from gamification of experiences to food and beverage, retail, and wellness opportunities.
      • Customer Engagement and Advocacy: Solutions that drive loyalty, brand advocacy, or engage customers in repeat stays through a targeted and personalized experience as well as non-stay experiences (i.e. guests from the community who are not staying in the hotel).

      Business travel will be a component of the challenge

      “Business travel is a big chunk of our business, obviously, and I am most certain that these startups that we will work with in the course of this challenge will be particularly geared toward business travel,” said Stephanie Linnartz, global chief commercial officer for Marriott. Marriott operates more than 6,000 hotels across 30 brands worldwide.

Business travel often provides little time to experience a city. Innovative solutions that might appeal to business travelers could help them make the most of the spare hours between meetings and add experiences that enhance their stay, said Evan Burfield, CEO of 1776.

      “As a very frequent business traveler myself, your world can often seem unbelievably generic. You fly the same airlines, you stay in the same hotels. It’s great you’re getting a consistent experience, but you’re not necessarily getting an experience that’s unique to that city,” Burfield said.

“When we say travel experience, the word experience is crucial. A rapidly increasing share of business travelers are millennials. They’re bringing that same experience-driven lifestyle to business that they bring to their personal lives,” Burfield said.

      Marriott will work with startups to create the sense of connection and sense of place, and allow travelers personalized opportunities to experience the city that they’re in, Burfield said.

      Open to a broad range of companies

      The challenge is open to a broad range of companies around every aspect of travel. “We’re thinking about this as an opportunity to get new, fresh, innovative ideas around the entire travel experience,” Linnartz said.

      “The outcome will hopefully be some really cool new ideas we can implement in our hotels. I’m sure they won’t all be equally raging successes because that’s not the way life works out … but if it’s a really cool idea that we can’t actually implement in all of our hotels, maybe we can do it for one brand, or do a piece of it. I think this whole thing will bring new ideas to the table that none of us can do on our own,” Linnartz said.

      Top three takeaways for TechRepublic readers:

      1. The Travel Experience Incubator Challenge was announced earlier this week and applications are being accepted through August 7.
      2. Marriott, Accenture and 1776 are partnering to offer the challenge, which will culminate in 5-7 startups being selected for a program that will begin in September.
3. Business travelers will benefit from the results of the challenge, which may surface new ways for them to better experience the city they are visiting on business.

      Seattle, not San Francisco, is the fastest growing market for software engineers, report says

      Seattle—not the San Francisco Bay Area—has the fastest growing market for software engineers, according to new research from LinkedIn. The report, published on LinkedIn’s blog, also noted that Seattle offers a median compensation rate of $132,000, the second highest in the country, despite being a cheaper place to live than other big cities.

      For the report, LinkedIn mined its own data on software engineers, looking at skill popularity, geographic spread, and professional networking opportunities. The number of open tech jobs continues to grow, and certain skills stand out in the job market, the report found.

      According to the report, machine learning and data science are the most in-demand skills in tech. There are far more open jobs for those skills than there are professionals to fill them, the report said.

      SEE: How to build a successful data scientist career (free PDF)

In addition to being sought-after, jobs in data science and machine learning also pay well. Despite the field’s novelty, it has the highest median compensation of any tech job in the US, $129,000, the report said.

In terms of geography, Seattle may be the fastest growing market, but San Francisco still dominates in terms of compensation, job demand, and available talent. However, Seattle offers a much lower cost of living than San Francisco. Seattle’s median compensation is also $20,000 higher than New York City’s, again with a lower cost of living.

      As software engineers leave San Francisco, many are choosing to settle in cities like Los Angeles and Boston, the report said. Los Angeles lays claim to a major diaspora of mobile engineers, while Boston boasts the third-highest median compensation rate at $111,000.

      Professionals in the market for a new job shouldn’t rule out cities like Dallas and Philadelphia, either. The report said that they have low talent supply relative to their job markets, calling them “hidden gem” cities.

Networking also plays into the job search, as 56% of software engineers already knew someone at the organization they moved to before they took the job. Software engineers also regularly look for job opportunities and update their resumes and profile pages, the report found.

      The 3 big takeaways for TechRepublic readers

      1. Seattle has a faster growing software engineering market than San Francisco, a LinkedIn report found.
      2. Skills in data science and machine learning are the most sought-after among engineers by hiring organizations, the report found.
      3. More than half of all software engineers who take a new job know someone at that company already, according to the report.

      10 time-saving tips to speed your work in Word

      Most users pick up efficiency tips, such as using styles, keyboard shortcuts, and Format Painter, as beginners. What you’ll find, though, is that even experts sometimes do things the hard way. In this article, I’ll share 10 tips for working faster in Word. They’re not new by any means, but a few of them might be new to you.

      SEE: Microsoft Office 365: The smart person’s guide (TechRepublic)

      1: Reduce keystrokes with AutoText

      AutoText entries eliminate the need to manually enter frequently used text and graphics. You reduce your keystrokes and the potential for typos, and you can share these entries with others in your organization. Simply type the text (or insert the graphic) and then format it, if required. Then, select the content and press Alt+F3. Enter a meaningful but short name, as shown in Figure A. By default Word stores the entry in your Normal template.

      Figure A


      Name the AutoText entry.

      To use the AutoText entry, type the entry’s name—ssh—and press F3. Word will replace ssh with the formatted text or graphic—Susan Sales Harkins. To learn more about this time-saving feature, read Seven tips to tap into Word’s AutoText power.

      2: Prevent mistakes with AutoCorrect

AutoText is great for frequently used and reusable content, but it won’t make you a better typist. We all have words we misspell, and sometimes our fingers just work faster than our brains. AutoCorrect automatically corrects several universally misspelled words, such as replacing teh with the. Word fixes it for you as you type, and you don’t have to do a thing.

      Even better, you can add custom AutoCorrect entries. Click the File tab and choose Options in the left pane. Then, select Proofing in the left pane. In the AutoCorrect Options section, click AutoCorrect Options. Enter the content you want replaced in the Replace control and the replacement content in the With control. Figure B shows an entry that replaces maintnance with maintenance. The next time you type maintnance, AutoCorrect will correct the misspelling automatically.

      Figure B


      Let AutoCorrect fix your typos automatically.

      3: Change your Paste default

      Nothing annoys me more than pasting content from another source and then having to reformat it because the content doesn’t match my document’s formatting. You’ve probably run into it too. If you remember to use the Keep Text Only option from the Paste dropdown, you can avoid the extra reformatting step—if you remember. If this occurs often enough, you need to take control of the situation and change Word’s default settings by clicking the File tab, choosing Options, and then choosing Advanced in the left pane. In the Cut, Copy, And Paste section, choose Keep Text Only from the Pasting From Other Programs dropdown, as shown in Figure C.

      Figure C


      Change Word’s paste default.

After setting this option, content pasted from another source, including the web, will take on your document’s formatting. This is an application-level setting, so it will affect all documents, not just the current one.

      SEE: Microsoft Office Certification Training Bundle (TechRepublic Academy)

      4: Undo styles

If you can’t change your paste default, there are other ways to remove source formatting when you forget to use Keep Text Only from the Paste dropdown. With the pasted content selected, you can do either of the following to remove its styles, leaving plain text:

      • Press Ctrl+Spacebar
      • Click Normal in the Styles gallery

      5: Take advantage of real-time collaboration

      Office 365 (2013) started the real-time journey with Office Online. Office 365 (2016) takes it up a notch by offering more flexibility. Now the feature offers instant interaction within Word.

      Simply save a Word 2016 document to OneDrive, OneDrive for Business, or SharePoint Online. Click Share (upper-right corner) and enter or choose individuals you want to collaborate with, as shown in Figure D. Click Share when you’re ready to send an invitation email with a link to the document. With that link, invitees can open the shared document and make immediate changes. When you open the shared document, Word will tell you who else is currently working in the document. Invitees will see others as well.

      Figure D


      Send email links to share documents.

      Using real-time collaboration, you save time spent waiting on email exchanges—you can all work online at the same time. The whole process from beginning to end is efficient and easy to implement.

      If you’d like step-by-step instructions for collaborating in Word 2016, read Word real-time co-authoring—a closer look.

      6: Get info with Tell Me

      This new tool lets you ask a question in plain language and returns its best-guess responses about appropriate tools and features. Sometimes the response is textual information, sometimes it’s a link to a feature. It’s easy, but more important, it’s quick.

      SEE: 20 pro tips to make Windows 10 work the way you want (free PDF)

      7: Gain consistency and design help with templates

      There are tons of free templates available by download. If you can’t find what you need, you can tweak something that’s close, which is quicker than starting from scratch. Every time you start a new project, take a minute to review templates at Office templates & themes.

      8: Use the Word mobile app

With some Office 365 business subscriptions, you get mobile apps. Using the Word mobile app, you can view, edit, and create documents on the go using your phone or tablet. With quick access to files in the cloud, you can share via email or link. Better still, your finished documents offer consistency, whether you’re using Word desktop, the browser version, or your phone or tablet. You get consistent results using familiar tools.

To get started, use your mobile device’s browser to sign in to your Microsoft account. Once you’re in, choose the app and start working. Word mobile has limitations; in this context, efficiency is the freedom to access and manipulate documents wherever you are instead of waiting until you can get to the office. If your organization manages mobile devices, you might need help from your administrator, since admins control which devices are supported.

      9: Make a custom table format your default

      If you frequently apply a custom table format, you’re working harder than necessary. You can make your custom table format Word’s default table format. If the custom table style already exists, right-click it in the Table Styles gallery on the contextual Design tab and choose Set As Default, as shown in Figure E. If the style doesn’t exist, create it first, then make it the default.

      Figure E


      Make a Table style the default.

      10: Write anywhere

One of my favorite tips is writing anywhere on the page with a simple double-click. Most users don’t realize how easy it is to access a document’s white space. Try it: Open a blank document and double-click anywhere—anywhere at all. Start typing.

      Game companies are forerunners of the next wave of business tech innovation


      Game companies are early adopters. Many innovations common in today’s enterprise and SMB technology environment—the cloud, micro-payments, robust mobile apps, SaaS, and cyber-attack mitigation solutions—were first pioneered by video game developers.

      For Sandbox Interactive, developers of a mobile and PC game released this month called Albion Online, the cloud was instrumental in keeping costs manageable while still developing a AAA product that hosts hundreds of thousands of concurrent users.

The game deploys a hybrid of mechanics used in traditional multiplayer games like World of Warcraft coupled with loops used in modern mobile free-to-play games. The product is buy-to-play and costs $30, half the price of products distributed by major publishers. The price, said the studio, filters out resource-sapping trolls but is accessible to most of the market.

The gaming market is growing at a rapid clip, driven in large part by the cloud and mobile apps that lower production costs and speed the development process. Newzoo, a site that tracks business technology trends in gaming, reports that gaming was nearly a $100 billion market in 2016 and is expected to grow 20% by 2020. Mobile games and apps enabled by the cloud gobbled up 30% of the total market.

      SEE: Intellectual property: A new challenge in the cloud (Tech Pro Research)

      Albion Online is as much of an economy simulator as it is a game. Hosted in the cloud are hundreds of so-called mega-servers—shards that are seamlessly blended to allow thousands of users to play simultaneously. Nearly every item in the game is created by players, and virtual items are bought and sold at in-game auction houses at prices set by the market. This keeps gamers engaged longer, said Stefan Wiezorek, CEO of Albion’s parent company, Sandbox Interactive.

      TechRepublic spoke with Wiezorek about what enterprise companies and SMBs can learn about tech from game companies.

      Can you explain the emerging games-as-a-service model?

We do not think games-as-a-service is an entirely new concept. Depending on how exactly you’d define “Games-as-a-Service,” you can see its emergence ever since games went online. For an online game, nothing stops a developer from patching, improving, and expanding it post-release.

      We think it’s best looked at from two points of view: the product side and the revenue side. These two components are often linked but do not necessarily have to be.

Even if there are no further revenues tied to this, it will often pay off in the long term for developers who do this. A great example here would be Blizzard, who have kept extensively supporting even those games that are not generating any subscription or in-game revenues. StarCraft: Brood War was released in 1998 and has received tons of patches and balance updates since, the latest one being rolled out in April 2017. Blizzard’s strategy of supporting and improving old titles gives them the benefit of selling box copies of said titles for longer, but more importantly, significantly improves the reputation and trustworthiness of the developer. After all, if I spend money on an online game, I want to know that it will receive ongoing support and that the developer won’t just take my money and run once overall numbers go down.

      SEE: What startups and enterprise companies should learn from game devs (TechRepublic)

      Now, in most cases, games-as-a-service entails a more direct link between the revenues generated by the players and the services provided by the developer. Ongoing revenues —generated through subscriptions or in-game purchases—allow a developer to substantially improve and expand a game post-release and hence are ideally able to provide the players with a great game experience that never grows old. This concept is even more important for very ambitious and complex games that provide ample opportunities for expansion. When we release Albion Online we will by no means consider the game “complete.” There is a huge list of ideas, concepts, and features that would perfectly fit the game and that we plan to add in regular expansions post-launch. As our business model—games-as-a-service—already provides us with an ongoing revenue stream, we can roll out further expansions free of charge.

      Your game is very similar to a AAA, enterprise-grade product. What can enterprise companies learn from nimble game companies?

      Our main advantage is that we are not part of a large corporate structure and are not reliant on external funding from publishers. This is what actually allowed us to realize Albion Online. The game’s concept is extremely innovative, and larger companies tend to perceive this as a risk rather than an opportunity and hence would not even attempt it. From our point of view, the opposite is true: Rather than trying to compete in the MMORPG space with “just another World of Warcraft clone,” our goal is to fill the Sandbox MMORPG niche, which, as we see it, has been lacking great releases over the past 10 years. This is not for lack of ideas, though. Sandbox MMORPGs are probably the hardest type of game to make; it takes a lot of endurance, innovation, and trial and error. Smaller independent developers often lack the time and funding to make this work. In our case, thanks to having collected $9 million from our founders and additional funds from friends and family, we have managed to grow from a small indie developer to a medium-size one, giving us enough power to take Albion Online across the finish line and beyond.

      How has the cloud changed the business of games?

      Thus far, at least in our case, the most noticeable change is reduced server costs and easier server administration. We feel that a true transformation would happen if the actual graphics processing would happen in the cloud, too, with the client just acting as a streaming and input device. If that happens—and is technically feasible—suddenly system requirements cease to be an issue, creating vast opportunities for better and more immersive games on any device.

      What emerging technologies are most important to your product?

When we started working on Albion Online in 2012, having decided on the Unity game engine, we made the bold decision to support iOS and Android tablets, despite the fact that hardcore sandbox MMORPGs simply do not exist on these platforms and are considered to be exclusively the domain of PC gamers. What was important for us here was that we would make no compromises on the game just to make it work on tablets, and once we figured out that this wouldn’t be the case, we went ahead.

      We feel that mobile gaming, in particular on tablets, is slowly graduating towards more complex games. A ton of PC players also have a tablet and will very often miss “proper” games on that device, even though it’s technically possible. Albion Online will be one of the first games that breaks this wall.

Our vision with Albion is that tablets essentially become a second device for people who mainly play on PC. After some challenging PvP combat using your mouse and keyboard, you might want to do some crafting on your sofa while watching a movie. It’s a little bit similar to what Nintendo is currently doing with its Switch console.

      How are game mechanics deployed in companies outside the game industry?

      Using game mechanics outside of games, to influence and direct human behaviour, is called gamification. It has a lot of applications in business, education, sports, and other fields. Ultimately, its goal is to make doing or completing certain tasks more pleasurable, and hence, increase the likelihood and quality of these tasks being done.

An example from sports would be fitness watches and trackers, which are becoming more and more popular. You can track your performance, share it with others, complete certain tasks, and have little contests with other users—all of these elements being a core component of almost all games.

      Another example would be various apps and tools in the education sector, ranging from “brain trainers” to apps that teach you how to code. Most of these again use classical game mechanics such as tracking your score, giving you rewards for completing certain tasks, and telling you which tasks to do next.

      What’s important to understand about gamification is that something does not have to look and feel like an actual computer game to be “gamified.”

      It’s very much a question of degree. The underlying goal is not to turn an activity into a game; this is rather a means to an end. The goal is always to increase motivation, perseverance, fun, and quality of outcome for people doing certain activities and tasks. If somebody found a way to make doing homework as enjoyable as playing console games, they’d probably do more for the quality of education than any school reform could.

      What does the near future—say 18 or 36 months—of game tech look like?

Something we believe in is the evolution cycle of games on new devices, from simple to more complex. When the first personal computers came about, the first games we saw were very simple titles, such as Pong, Tetris, etc. Then, over the years, games became larger and more complex, culminating in titles such as World of Warcraft or Civilization.

We saw the same evolution pattern on consoles, in the social games space, and in the mobile games space, with each device at a different stage of that evolution right now. High-end consoles are now almost at the same level as PC games, social games progressed a lot but were largely overshadowed by mobile games, and mobile games, we think, will continue to evolve for years to come until they are very similar to where consoles are now.

      SEE: Augmented reality gaining more traction than virtual reality in the enterprise (Tech Pro Research)

      While Albion Online is first and foremost a PC game, it also tries to be far ahead of the curve in the mobile space, and we are hoping to be one of the first movers for more complex and deep tablet games.

      From a purely tech point of view, the main thing to watch is of course the development of VR. Some people are skeptical about its long term prospects. We do not share this skepticism; we believe it’s here to stay and to take a massive role in tomorrow’s game world.

      50% of low-skilled jobs will be replaced by AI and automation, report claims

      While artificial intelligence (AI) and automation are poised to shake up the workforce by becoming skilled at performing human tasks, it has not been clear exactly how many—and which—human workers will be affected by the changes. And although AI is expected to master a variety of human tasks—351 scientists just offered a timeline for when human tasks will be completed by machines—the vast majority of US workers still do not fear that their entire job will be replaced by robots, according to the 2017 Randstad Employer Brand Research.

      A new report, however, sheds light on which human workers will be most impacted by advances in automation and AI, by geographic region. Ball State University in Muncie, Indiana, recently released a report from its Center for Business and Economic Research making a bold prediction: Half of low-skilled US jobs are at risk of being replaced by automation.

      The report examined how AI and automation will impact the workforce in America by mapping out two variables: Risk of automation, and offshore job losses. It found a “very strong regional concentration of potential automation and trade job losses facing American communities.”

According to the report, job losses will not be spread evenly across income levels: lower-wage, low-skilled workers are most at risk of losing work due to automation. In both cases—losses due to offshoring as well as losses due to AI and automation—rural communities are more at risk, with the report stating that “urban places tend to offer more resilience due to existing forces of agglomeration.”

      It’s clear that AI and automation will force both employers and employees to change the way we think about work. TechRepublic’s Alison DeNisco has also reported on the effects of automation, from a geographical standpoint, looking at how US cities will be most impacted. “Low-wage cities such as Las Vegas, Orlando, and El Paso will be hit the hardest by job automation, according to a recent report from the Institute for Spatial Economic Analysis (ISEA),” DeNisco wrote. She went on to add that job losses are likely to be more drastic than previously predicted, and that the jobs that may take the greatest hits—due to advances in machine learning—are in truck driving, healthcare diagnostics, and education.

      These 10 programming languages have dominated development in 2017

With the continued growth of interest in software engineering and developer jobs, it seems like everyone wants to know which programming languages are the most useful to learn. The popularity of these languages ebbs and flows with the market, so it’s important that current and would-be developers stay on top of the trends.

      Whether it’s a stalwart legacy language, or a new one that is taking the industry by storm, keeping your skills well-rounded can make you a more attractive job candidate, or potentially earn you extra responsibility at your company. But, you need to choose which languages to invest in wisely.

      To help better understand language popularity, the Institute of Electrical and Electronics Engineers (IEEE) recently released its list of the top programming languages for 2017 on its web publication, IEEE Spectrum. The list is interactive, and can be sorted a variety of ways, but here is how language popularity ranks for the typical IEEE Spectrum reader.

      1. Python

Python is the No. 1 language of 2017, up two places from its position last year, the list said. It was also among the languages trending most strongly in job postings and on open source hubs.

      SEE: Python Programming Bootcamp (TechRepublic Academy)

      2. C

      C can trace its origin all the way back to the early 1970s, around the same time as Unix. Despite its age, C is still popular in open source software, and for a variety of other uses.

      SEE: C Programming for Beginners (TechRepublic Academy)

      3. Java

An object-oriented language, Java routinely tops charts as one of the most popular languages in use. The language was birthed in 1995 by James Gosling at Sun Microsystems, which was later acquired by Oracle. Oracle has brought lawsuits against Google over the use of Java in the Android OS.

      SEE: Ultimate Java Bundle (TechRepublic Academy)

      4. C++

      C++ debuted in 1983, and has gone on to influence a host of other languages. Typically, large-scale systems designed for commercial purposes make use of C++, including many popular desktop operating systems.

      SEE: C++ for Beginners (TechRepublic Academy)

      5. C#

      Inspired by the sharp musical notation, C# hit the scene as part of Microsoft’s .NET framework. In 2017, C# reentered the top five, the IEEE list noted, reclaiming the spot it lost to R last year.

      SEE: Complete C# Coding Bootcamp (TechRepublic Academy)

      6. R

      Available under the GNU General Public License, R is commonly associated with statistical applications and data analysis. With the strong growth of data science jobs in the enterprise, it’s likely that R will stay popular for a while.

      SEE: The Complete Introduction to R Programming Bundle (TechRepublic Academy)

      7. JavaScript

      Alongside HTML and CSS, JavaScript is one of the foundational tools used to build interactive website elements and some online games. Introduced in 1995, it has grown from only client-side implementations to working server-side as well.

      SEE: Javascript – A Complete Guide (TechRepublic Academy)

      8. PHP

      PHP, which stands for PHP: Hypertext Preprocessor, is geared more toward web development, but can be used for other purposes as well. Version 7.2 is due out by the end of November 2017.

      SEE: Learn Advanced PHP Programming (TechRepublic Academy)

      9. Go

A relatively young language, Go originated inside Google in 2007 and was released as an open source project in 2009. Go is used by many enterprise tools and companies, such as Docker, Dropbox, MongoDB, and more.

      SEE: Google Go Programming for Beginners (TechRepublic Academy)

      10. Swift

      The most recent language on the list, Swift was created by Apple and unveiled in June 2014. Swift is used to program for macOS, iOS, watchOS, and tvOS, but it is open source, so it has seen other implementations as well.

      SEE: Swift 3 Fundamentals & Essential Training (TechRepublic Academy)

      How to use Group Policy to resolve Active Directory account lockouts


      I wrote recently about how to reduce account lockouts and password resets. Even with these tips in place, however, it’s still possible to get entangled in a difficult troubleshooting ordeal trying to figure out why user accounts repeatedly lock. Case in point: if your Active Directory domain policy requires periodic password changes and a user updates their password while actively logged into a system, they may find themselves locked out again and again. This can understandably be a stressful experience for both the user and the system administrator involved.

      This may not be a problem if the user is logged into a couple of systems at once and knows which ones these are. However, if there are numerous systems they might be logged into, a major headache can ensue trying to pinpoint which one is causing the issue. It’s not feasible to just reboot everything in the domain (particularly with shared systems), so short of engaging in complex network traffic analysis what can you do? Simple: use account auditing in Group Policy to locate the troublesome machine and solve the problem.

      In Active Directory environments, users authenticate to computers via their domain credentials. These credentials are transmitted to the domain controllers for validation, so when authentications fail the domain controllers take note of this – if the right setting is enabled. These steps will also work for a single server against which a user keeps failing to log in; you can either edit the Default Domain Policy to enact the same change, or edit the Local Policy (found under Administrative Tools, or you can click Run then enter GPEDIT.MSC to access it).

Note that the screenshots contained in this article were taken on a Windows 2008 R2 server, but these steps will apply to any later GUI-based version of Windows Server.

      At your domain controller (or while using an MMC console to administer the domain from your workstation), open the Group Policy Management tool. Expand the Domain Controllers folder. You will see Default Domain Controllers Policy underneath:

      (Note: all screenshots were cleansed to protect confidential data)


      Right-click Default Domain Controllers Policy and select Edit:


Expand Computer Configuration, Windows Settings, Security Settings, Local Policies, then select Audit Policy.


      Double-click Audit account logon events. Check off Define these policy settings and then check off Failure. Click OK.

      Now the fun starts. Account logon failures will be logged in the Event Viewer. Access this feature and open the Security Log. Look for any events corresponding to Event ID 4771 (you can use the Filter Current Log selection in the right side of the screen to filter all events to only show this particular ID).


As an example, I intentionally attempted to log on with the wrong password and found the corresponding event at the top of the log. The event’s Client Address field showed that the failed authentication attempt came from the IP address 10.1.9.12.
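If you’d rather pull these events from a script than click through Event Viewer, the built-in wevtutil utility can run the same Event ID 4771 query from the command line. The Python sketch below simply wraps it and surfaces the Account Name and Client Address lines; treat it as an illustrative shortcut rather than part of the steps above. It needs to run in an elevated session on the domain controller, or be pointed at one with wevtutil’s /r: switch.

```python
import subprocess

# XPath filter for Kerberos pre-authentication failures (Event ID 4771).
QUERY = "*[System[(EventID=4771)]]"

def recent_4771_events(count: int = 20) -> str:
    """Return the newest <count> matching Security-log events as plain text."""
    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{QUERY}",
         f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    for line in recent_4771_events().splitlines():
        # Crude text scrape: just show who failed and from which client address.
        if "Account Name" in line or "Client Address" in line:
            print(line.strip())
```

Running it during a lockout storm gives you the same client IP information as the Event Viewer filter, but in a form you can paste into a ticket or feed to a monitoring script.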

      Once you know the IP address of the problem system, you can have the user log in and disconnect their session, or do so for them while logged in as administrator (go to Task Manager, click the Users tab, then right-click the user’s account name and select Disconnect.) The flurry of lockouts will then cease.

It’s probably a good idea to leaf through these entries periodically to stay aware of where failed logon attempts are coming from and to proactively resolve such issues, such as expired system accounts or forgotten logon sessions. If you’d prefer automation to handle it for you, Tech Pro Research offers a toolkit to enable event triggers to monitor Windows servers. This option would enable you to receive an email, text message, or other type of alert whenever account logons fail, but it should be customized carefully lest your inbox fill up or your phone go berserk from dozens or hundreds of alerts.

      Note: leaving the setting to audit failed account logon events enabled can add a lot of entries to your Security log or consume excessive disk space, so you may want to consider turning this setting back to Not Defined so it is only enabled during active troubleshooting sessions.