How critical thinking, or lack of it, makes or breaks projects


The Foundation for Critical Thinking, which strives to effect changes in education and society through the cultivation of critical thinking, defines critical thinking as “the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.”

Recent examples of social media users falling prey to false news have highlighted the need for critical thinking, but that need is anything but new.

SEE: Fake news is everywhere. Should the tech world help stop the spread?

In their book, Academically Adrift: Limited Learning on College Campuses, Professor Richard Arum and Associate Professor Josipa Roksa reported on a study of undergraduates who were two years into their university studies. The study found that 45% of the students surveyed demonstrated no improvement in their ability to think critically after two years of college.

Hoping to address this need for more critical thinking, at least one university, the University of Massachusetts at Lowell, now offers a course on critical thinking for project managers.

Among the objectives of the UMass Lowell project management course are helping project managers clarify issues in order to arrive at correct project assumptions; clear up ambiguous project communications while ensuring that everything going out about a project is as clear and accurate as possible; and project the future implications of project decisions across a variety of scenarios while incorporating insights from individuals on the project who hold different points of view.

Critical thinking is a necessity outside of academia as well. Here are some ways it makes a difference in IT projects.

Clarifying issues and arriving at correct project assumptions

When project managers and their staffs are under the gun to make projects happen quickly and deliver tangible results, critical thinking can quickly fall by the wayside. Instead, IT frantically tries to finish the project and meet timelines through a lot of “heads down” work that bypasses regular contact with stakeholders, as well as continuous project planning and assessment. This is where factors such as scope creep, overcommitment of resources, and task underestimates begin to creep in. These factors are also major drivers of project failure.

Critical thinking can improve communication

Project team members are hit with demands and evolving problems from all sides during the course of their work: end users, technical and system problems, issues with colleagues, and even their project managers! If the work they’re doing is highly technical and deadlines are tight, the work will be done “heads-down.” Right or wrong, they will expect their project manager to do most of the critical thinking and to keep them updated on outside (as well as inside) factors that could impact their work.

I once worked for a project manager who established his reputation as a technical programming expert. The manager told me, “I should only have to say something once. There is no time to repeat it, and I expect my staff to understand it the first time.”

But staff doesn’t always understand the first time, especially if the message is abstract or conceptual. Clearly communicating the results of critical thinking is just as important as actually taking the time to do it.

Incorporating ideas from others is another area where critical thinking is an important leadership skill. Many IT’ers come from technical backgrounds and consider themselves “doers,” not talkers. However, critical thinking in projects means getting the best ideas from project stakeholders and staff, and then plugging these ideas productively into the project. In the process, the project manager must be the orchestrator and critical thinker-in-chief—and he or she must know how to use meetings and brainstorming sessions to the project’s advantage.

Critical thinking produces realistic expectations

Being able to critically think through a project, and then taking the steps to ensure the highest probability of achieving success, are hallmarks of effective project management. But project managers, like their teams, are driven by deadline pressures.

In one case, a manager was asked to deliver a multi-million dollar project to a Wall Street firm by the end of the year. He couldn’t make it, but rather than tell the user, he fudged timelines and made it look like the project was on time. Meanwhile, staff was asked to just get the system done. Quality control work was bypassed. The end project resulted in disaster and produced an angry client.

This was the first project I was asked to take over as a project manager. Going in, I decided to give the client the unvarnished truth about the project—that it would take another six months to complete it. I considered that I might be immediately fired by the client, and also by my own management, but I had done enough critical thinking about the project and what needed to be done to conclude that the project would only end up in the same place six months later if I didn’t deliver that message.

Critical thinking is a requirement for every IT project, which can “make or break” on a communications breakdown with staff members and end users just as easily as on missed or late tasks.

Project environments are fast-paced, and they bring with them very high expectations. These high expectations and compressed timelines place pressures on project managers as well as on project stakeholders and end users—but at the end of the day, everyone engaged will look to the project manager for critical thinking and judgments. No one on the project team is better positioned to make the tough calls.

How machine learning helps the United Nations monitor global events


The United Nations is a vast organization with a diverse set of technology needs. The organization is an umbrella under which numerous sub-organizations—UNHCR and UNICEF, for example—operate somewhat autonomously. The UN is charged with providing global humanitarian relief, resolving multinational disputes, and tracking global crime. Tech innovation—particularly big data—aids the UN’s mission by making operations more efficient and providing critical operational insight. The UN generates and tracks information on a global scale but has trouble managing large piles of data and is staffed with policy makers who are not inherently tech-savvy.

To grapple with the challenge of big data—in support of the Sustainable Development Goals—the UN partnered with Deepsense.io, a startup that helps non-technical organizations deploy and benefit from data analysis and machine learning. The machine learning firm developed a suite of applications that are accessible to the layperson but provide deep hooks for developers.

Seahorse, the company’s data processing product, is a visual interface built on Apache Spark that helps companies extract, transform, and load (ETL) data across clusters. “The clean front end helps users to tackle complex real-world data challenges without getting bogged down in [code],” said company spokesperson Kamila Stepniowska. “It’s a powerful tool, but designed for non-technical people and is pretty easy to use.”

SEE: Seven ways to build brand awareness into your digital strategy (Tech Pro Research)

“Machine learning seems particularly effective in two situations: systems that are unpredictable and constantly changing and systems that are so complex that we cannot fully describe them in one model,” said Lambert Hogenhout, Chief Data Analytics, Innovation and Partnerships at the United Nations, during the organization’s recent TechNovation Brief. “The UN is facing both of these: understanding how the world works is infinitely complex, and it changes all the time. That is why I believe machine learning can be of value to us.”


Data sets, local files, and code libraries are presented in Seahorse as visual nodes, then joined with actions and outputs. Though the visual interface uses a common workflow metaphor, users are not limited to a predefined set of actions and can write and import their own code in Python and R, Stepniowska said. Spark acts as the machine learning glue that connects, acts on, and outputs data.

The UN also uses Neptune, a machine learning metrics and monitoring platform. Also built on Spark, Neptune inputs raw text files combined with common code libraries from H2O, Keras.io, Lasagne, Scikit-learn, TensorFlow, and Theano. The output is a visual graph that tracks and compares large logs in real time. The platform excels at long-term trend analysis. By combining global news items from a variety of sources with Twitter data, the UN uses Neptune to track the dissemination of propaganda online.

WATCH: Documentary shows information revolution of big data (CBS News)

Stepniowska cited the UN as an example of how accessible and powerful machine learning tools help organizations become more nimble and tech-savvy. Like many corporate IT and innovation departments, the UN tech team is based in New York and services nearly 4,000 workers around the world. “The importance of this project goes well beyond the modeling results by providing an opportunity to explore how data science can impact programmatic needs on use cases related to UN mandates,” said Radia Funna, Head of Innovation, Office of Information & Communications Technology, at the event. “Presenting these case studies and results… exposes UN staff from different parts of the institution both to machine learning as a powerful tool and to the design thinking that will be useful in implementing this tool.”

Nissan halts joint development of luxury cars with Daimler, report says

PARIS — Nissan is halting joint development of luxury cars with Daimler’s Mercedes-Benz, sources close to the companies told Reuters, suspending a key project in their seven-year partnership and potentially hitting profitability at a new shared factory in Mexico.

Nissan decided in October its premium Infiniti brand would not use “MFA2,” an upgraded Daimler car platform that the companies have jointly funded, in part because Infiniti was not performing well enough to absorb Mercedes’ technology costs, the sources said.

“It wasn’t possible to close a deal on the basis of MFA2,” said one of the people. “The targets set by Infiniti were too difficult to achieve.”

The move could reduce efficiency at a $1 billion shared factory opening this year in Aguascalientes, Mexico, where the companies had planned to use the same compact car architecture to cut complexity and production costs, two of the sources said.

It could also ultimately force Nissan to write down part of a 250 million pound ($306 million) investment at its UK plant that included Mercedes-based tooling, they added.

Daimler and Nissan pursue joint programs only when “beneficial for both sides,” the companies said in separate statements to Reuters, without directly addressing emailed questions about their plans for MFA2 vehicles.

Projects are constantly reviewed against targets to account for “developments beyond the control of management,” they added, and discussions about joint development of future premium compact cars are ongoing.

Nissan’s decision deals a blow to the broad cooperation deal struck between Renault-Nissan boss Carlos Ghosn and his Daimler counterpart Dieter Zetsche in 2010.

It also underscores the mixed results of Nissan’s battle over almost three decades to transform Infiniti into a significant global player in the lucrative luxury car market.

Predates Trump

The decision predates Donald Trump’s election as the next U.S. president, the sources said, and was unrelated to campaign vows to penalize Mexican imports that have rattled the auto industry. Ford on Tuesday scrapped a planned compact car plant in the country.

Nissan and Daimler are pushing ahead with Aguascalientes, where they will build Infiniti and Mercedes models for the U.S. and other markets from a single assembly line opening in 2017.

The project nonetheless faces weakening U.S. demand for smaller cars that contributed to Ford’s cancellation and has further raised profitability hurdles for new Infiniti compacts.

Persistently low oil prices accelerated the market shift to larger vehicles in 2016, Ford sales chief Mark LaNeve said on Wednesday. “All the growth was SUVs and trucks.”

Premium struggle

Infiniti has struggled outside the U.S., last year selling 16,000 vehicles in Western Europe and 230,000 globally — less than 5 percent of Nissan’s overall tally and barely one-tenth of Mercedes’s expected 2 million deliveries.

The first Infiniti appeared in 1989, the same year as the launch model for Toyota’s upscale Lexus brand — which has since grown three times bigger by sales.

Modern carmakers pursue economies of scale by increasing the number of models built on each underlying platform — an adaptable chassis accommodating different body sizes, engines and alternative component sets for every part of the vehicle.

The retreat on luxury compacts leaves intact the sharing of engines between Infiniti and Mercedes, and small cars between Renault and Daimler’s Smart. The three groups also collaborate on vans and pickups.

But joint premium car development for Mexican production was “one of the largest projects between the Renault-Nissan alliance and Daimler,” Ghosn said when unveiling the program in 2014.

A year later, after upgrading its plant in Sunderland, England, Nissan began building the Infiniti Q30 hatchback on the current MFA architecture developed for the Mercedes A-class and derivatives. The plant added the QX30 SUV in 2016, extending Infiniti’s push into smaller vehicles.

Nissan has now ditched plans to use the updated Mercedes platform for successors to those models planned for Aguascalientes, the sources said — or for any future Infinitis. Other cancellations include a compact Mercedes-based Infiniti Q40 sedan earmarked for the plant in 2018.

Instead the single, less efficient assembly line will build Mercedes cars including an A-class sedan and subsequent mini-SUV alongside Infiniti vehicles based on Renault-Nissan architecture, starting with a new QX50 SUV this year.

Pricing power

Nissan was forced to conclude that the Infiniti brand would not command the higher prices required to turn a profit on vehicles stuffed with Mercedes technology, one source explained.

“One of the lessons learned is that if you have the costs of a luxury vehicle but not the pricing, it’s hard to be profitable,” he said.

Nissan may end up writing down some Sunderland investment in Mercedes-based tooling that had been intended to outlast the current Q30 and QX30, people with knowledge of the matter said.

The company is still paying its share of MFA2 development costs running to hundreds of millions of euros for a platform it no longer plans to use, they said, but will leave Daimler with a higher share of some production costs in Aguascalientes.

The setback may also show the limits of Ghosn’s consensual approach to economies of scale as head of both Renault and Nissan, whose 18-year-old alliance is underpinned by significant cross-shareholdings.

The slow pace of integration has contributed to upheaval at the recently created alliance powertrain division, charged with converging Renault and Nissan engineering.

Plans to build Infinitis on Mercedes technology had encountered resistance at Nissan from the start, one source said, adding:

“Once again, Ghosn has been unable to break through the wall of engineers to force commonality.”

CES 2017: This vibrating wristband claims to reduce workplace stress


The Doppel band claims to control mood via vibrations applied to the wrist.

CES 2017 is awash with newly released wearable gadgets, many of which let users count their steps, track workouts, and monitor sleeping patterns.

But at least one device maker is breaking from the pack with a seemingly no-frills wristband (there are no screens or buttons) aimed at reducing workplace stress.

The Doppel band, made by a London-based startup of the same name, is equipped with a motor that delivers rhythmic vibrations to the user’s wrist.

According to the company, the vibrations trigger the body’s natural response to rhythm, helping users stay alert or calm down based on the speed of the vibration. Users regulate the rhythm of the vibrations by tapping on the device or via smartphone app.

“With doppel, you use your own body to help rev up for that really long meeting, stay calm during an important presentation or wind down at the end of the day,” the company claims in a press release.

To the skeptic, the Doppel band stinks of snake oil. But the company says the device has been independently tested by Royal Holloway, University of London, and shown to double a user’s focus and alertness as well as significantly reduce stress.

The Doppel wristband will be available in the US later this spring for $179.

Cadillac offers ‘vehicle subscription’ service for $1,500 a month in NYC

Cadillac will launch a new “vehicle subscription” service next month that allows customers access to its vehicles for a flat monthly rate, no financing or leasing required.

The Book by Cadillac service will launch Feb. 1 for drivers in the New York metropolitan area, spokesman Eneuri Acosta said today.

Members will be able to request various Cadillac models through a smartphone app for a flat rate of $1,500 per month, including registration, insurance and taxes. The requested Cadillac model will then be delivered to the user’s requested location via concierge and can be exchanged at the user’s convenience, according to a news release.

The program could give Cadillac a larger presence among drivers in major metropolitan areas such as New York, where the brand recently relocated. Acosta said Book provides drivers in urban markets with flexibility, a key in cities where parking is limited and public transportation is robust.

For instance, drivers could request an Escalade SUV to keep on hand during the summer months for road trips before exchanging it for a smaller car for city travel the rest of the year.

“We think there’s a white space between the leasing and financing models of the world and new sharing models,” Acosta said.

As residents in densely populated cities increasingly turn to car-sharing services, luxury brands such as Cadillac have been experimenting with new vehicle usage models.

For instance, Audi launched a pilot program in Berlin in late 2014 that allows customers access to several models that could be rotated over a 12-month period. It also launched a San Francisco pilot program called Audi On Demand that drops off and picks up rental vehicles at locations pre-set by the user.

Acosta said Cadillac, which introduced a pilot program for Book last year, found that users are mostly interested in using Book as a secondary mode of transportation, with a vehicle they own or public transit remaining their primary way to get around.

“Book by Cadillac is an innovative new option targeted at a growing class of luxury drivers searching for access to various cars over time, dependent on their individual needs, coupled with a hassle-free white-glove exchange,” Uwe Ellinghaus, Cadillac chief marketing officer, said in a statement today.

Book users will have access to Platinum-level XT5 midsize crossovers, CT6 sedans, Escalades and V-series performance cars. The number of vehicles available for use will be “limited at launch” but will expand over time, Acosta said.

Cadillac said it plans to expand the Book service beyond the New York market in the future. Acosta declined to discuss which markets the luxury brand is examining or how soon it could expand.

Cadillac sales

Separately, Cadillac said its global sales gained 11 percent in 2016 from a year earlier to 308,692 units, the brand’s best annual performance since 1986.

The gain was driven in large part by a 46 percent rise in China sales to 116,406 units. Sales in each of the brand’s three other major markets — the U.S., Canada and the Middle East — declined, including a 3 percent drop in American deliveries to 170,006 units.

“It was a stunning year for Cadillac’s global growth in 2016,” Cadillac President Johan de Nysschen said in a statement today. “Drawing more customers than any year in the past 30 is an excellent springboard for the robust product offensive from Cadillac in the coming years.”

CES 2017: ASUS brings AR to its smartphones with Google Tango-powered ZenFone


The second Google Tango-powered smartphone has officially been unveiled as the ASUS ZenFone AR, which the company debuted alongside the ZenFone 3 Zoom at the 2017 Consumer Electronics Show (CES) in Las Vegas on Wednesday.

The ZenFone AR is the first smartphone that supports both the Tango augmented reality (AR) technology and Google’s Daydream virtual reality (VR) technology, according to a press release. The first phone to support Tango itself was the Lenovo Phab 2 Pro, announced in late 2016.

For those unfamiliar, Tango helps to map a user’s position in their physical environment, so that developers can build AR experiences relative to that particular environment, the release said. And Daydream supports the use of VR apps with the Google Daydream headset.

SEE: CES 2017: New ASUS Chromebook has business style, touchscreen, and Android apps

To qualify for use with Google’s Tango, a device must have specific sensors and software that allow it to interpret the world around it. “Tango adds three new abilities to ZenFone AR: motion tracking, depth perception and area learning. Through these capabilities, Zenfone AR can detect how far it is away from a floor, wall or an object and understand where it is moving in three-dimensional space,” the release said.

The ZenFone AR uses a Snapdragon 821 processor and has a high-resolution 5.7-inch screen. Its screen makes it a good option for use within the Daydream VR headset, as it adds to the immersiveness of VR apps, the release said.

ASUS also announced the ZenFone 3 Zoom, a smartphone designed for photography with long battery life. The 5.5-inch smartphone has a 5,000mAh battery packed into a 7.9mm chassis, making it the world’s thinnest phone to use such a battery, the release said.

Much like the iPhone 7, the ZenFone 3 Zoom utilizes a dual camera setup. According to the release it has a 12MP, f/1.7-aperture, 25mm wide-angle main lens, along with a dedicated 12MP, 56mm camera for instant 2.3X optical zoom. It also includes a portrait mode for depth-of-field shots with blurred backgrounds. The phone supports RAW files and can record 4K video.

In terms of battery life, the ZenFone 3 Zoom claims 40 days of standby power on 4G, and it also can be used as a power bank to charge other devices, the release said.

The ASUS ZenFone AR will be available in Q2 2017, while the ASUS ZenFone 3 Zoom will be available in February.

The 3 big takeaways for TechRepublic readers

  1. At CES 2017, ASUS announced the new ZenFone AR, which offers support for both Google’s Tango AR and Daydream VR technology.
  2. The ZenFone AR is the second phone to offer support for Tango, providing 3D mapping capabilities for use in AR apps.
  3. The ZenFone 3 Zoom is geared toward photography with a dual camera setup and support for RAW file format.

Ford wants to turn your car into a Wi-Fi hotspot


Lost your car? Soon your smartwatch will be able to help you to find it.

Image: Ford

Motor giant Ford is adding a built-in Wi-Fi hotspot to some of its models, beginning later this year.

The in-car Wi-Fi hotspot will use AT&T’s 4G LTE network for connectivity and can support up to 10 devices at a time. When the vehicle is parked, the Wi-Fi is accessible from up to 50 feet away, thanks to an external antenna that improves signal strength.

SYNC Connect Wi-Fi hotspot users can monitor data usage, signal strength and connected devices, plus block certain devices and change settings on the SYNC 3 touch screen. They can also use FordPass to view Wi-Fi data usage and link to AT&T’s account management portal.

Buyers of new vehicles with the Wi-Fi hotspot — which will go on sale in the autumn — will get a trial subscription of three months or 3GB, whichever comes first, after which users will need an AT&T data plan.

Ford also noted: “Don’t drive while distracted. Use voice-operated systems when possible, and don’t use handheld devices while driving.”

Companies like Ford are trying to take advantage of the increasing computing power embedded into vehicles to create new revenue streams, in a bid to diversify their business and move away from reliance on simply selling new vehicles.

The new service was announced at the CES show, where Ford also said its cars will be able to send messages to wearers of Samsung Gear S2 and S3 smartwatches via the new Gear Auto Link app, due in the spring.

After parking, drivers will receive a prompt on their smartwatch asking if they want to log the parking spot, which is picked up from the vehicle’s GPS. For vehicles inside a parking garage, the driver can type the level, column and other location indicators into the watch, so that on their return to the vehicle, drivers can receive directions to navigate back to the parking spot using the watch.

Drivers can also use their smartwatch to stay alert by using the Gear to set chimes and voice alerts at three-, five-, 10-, 15- or 20-minute intervals while on the road. Ford said future versions of the app will vibrate the watch as an added way to help keep the driver alert.

The company also announced that it is building Amazon’s Alexa voice-controlled digital assistant into some car models.

More on in-car tech

The 2017 ultimate guide to Gmail backup

This article was originally published in July 2015. It has been updated for 2017.

A few years ago, I moved off of Office 365 and Outlook and onto Gmail. Many of you thought I’d regret the move, but I have to tell you that Gmail has been a nearly frictionless experience. I don’t think I’d ever go back to using a standalone email application. In fact, I’m moving as many applications as I can to the cloud, just because of the seamless benefits that provides.

Many of you also asked the one question that did have me a bit bothered: how do you back up a Gmail account? While Google has a strong track record of managing data, accounts can be hacked, and the possibility exists that someone could get locked out of a Gmail account.

Many of us have years of mission-critical business and personal history in our Gmail archives, and it’s a good idea to have a plan for making regular backups. In this article (and its accompanying gallery), I will discuss a number of excellent approaches for backing up your Gmail data.

By the way, I’m distinguishing Gmail from G Suite, because there is a wide range of G Suite solutions. Even though Gmail is the consumer offering, so many of us use Gmail as our hub for all things that it makes sense to discuss Gmail on its own merits.

Overall, there are three main approaches: on-the-fly forwarding, download-and-archive, and periodic or one-time backup snapshots. I’ll discuss each approach in turn.

On-the-fly forwarding

Perhaps the easiest method of backup, if less secure or complete than the others, is the on-the-fly forwarding approach. The idea here is that every message that comes into Gmail is then forwarded or processed in some way, ensuring its availability as an archive.

Before discussing the details of how this works, let’s cover some of the disadvantages. First, unless you start doing this as soon as you begin using Gmail, you will not have a complete backup. You’ll only have a backup of the mail flow going forward.

Second, while incoming mail can be preserved in another storage mechanism, none of your outgoing email messages will be archived. Gmail doesn’t have an “on send” filter.

Finally, there are security issues involved in sending email messages to other sources, often in open and unencrypted text format.

Those considerations aside, it’s a way to go.

Gmail forwarding filter: The easiest of these mechanisms is to set up a filter in Gmail. Set it to forward all your email to another email account on some other service. There you go. Done.

G Suite forwarding: One easy way I grab all incoming mail to my corporate domain is using a G Suite account. My company-related email comes into the G Suite account, a filter is applied, and that email is sent on its way to my main Gmail account.

This provides two benefits. First, I keep a copy in a second Google account and, for $8.33/mo, I get pretty good support from Google. The disadvantage of this, speaking personally, is only one of my many email addresses is archived using this method, and no mail I send is stored.

SMTP server forwarding rules: For the longest time, I used Exchange and Outlook as my email environment and Gmail as my incoming-mail backup. My domain was set to an SMTP server running at my hosting company, and I had a server-side rule that sent every email message both to Exchange and to Gmail.

You can reverse this. You could also send mail for a private domain to an SMTP server, but use another service (whether Office 365 or something free, like Outlook.com) as a backup destination.

Forward to Evernote: Each Evernote account comes with a special email address that you can use to mail things directly into your Evernote archive. This is a variation on the Gmail forwarding filter, in that you’d still use Gmail to forward everything, but this time to the Evernote-provided email address. Boom! Incoming mail stored in Evernote.

IFTTT to Dropbox (or Google Drive or OneNote, etc.): While this approach isn’t strictly forwarding, it’s another on-the-fly approach that provides a backup as your mail comes in. There are a bunch of great recipes that link Gmail to storage services, and you can use IFTTT.com to back up all your messages, or just incoming attachments, to a service like Dropbox.
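If you’d rather keep the attachment-saving step local, the same idea those recipes implement can be sketched with Python’s standard email module. The function name here is illustrative, not part of any service’s API:

```python
import email
import email.policy
from pathlib import Path

def save_attachments(raw_message: bytes, out_dir: str) -> list:
    """Parse one raw RFC 822 message and write each attachment to out_dir.

    Returns the list of file paths written. A production version would
    also sanitize and de-duplicate filenames.
    """
    msg = email.message_from_bytes(raw_message, policy=email.policy.default)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for part in msg.iter_attachments():
        name = part.get_filename() or "unnamed.bin"
        path = out / name
        # decode=True yields the attachment's raw bytes, base64-decoded if needed
        path.write_bytes(part.get_payload(decode=True) or b"")
        written.append(str(path))
    return written
```

Feed it the raw bytes of any message, for example one fetched over IMAP, and it writes each attachment to disk.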

In each of these cases, you’re essentially moving one cloud email store to another email store, so if you want something that you can physically control, let’s go on to the next strategy.

Download-and-archive

The download and archive group covers methods that get your message store (and all your messages) from the cloud down to a local machine. This means that even if you lost your internet connection, lost your Gmail account, or your online accounts got hacked, you’d have a safe archive on your local machine (and, perhaps, even backed up to local, offline media).

Local email client software: Perhaps the most tried-and-true approach for this is using a local email client program. You can run anything from Thunderbird to Outlook to Apple Mail to a wide range of traditional, old-school PC-based email clients.

All you need to do is set up Gmail to allow IMAP (Settings -> Forwarding and POP/IMAP -> Enable IMAP) and then point an email client at Gmail via IMAP. You want IMAP instead of POP3 because IMAP leaves the messages on the server (in your Gmail archive), whereas POP3 will suck them all down, removing them from the cloud.

You’ll also need to go into your Label settings. There, you’ll find a list of your labels, and on the right-hand side is a “Show in IMAP” setting. You must make sure this is checked so the IMAP client can see the email stored in what it will think are folders. Yes, you might get some message duplication, but it’s a backup, so who cares, right?

Just be sure you check your client configuration. Some of them have obscure settings that limit just how much of your server-based mail it will download.

The only real downside of this approach is that you need to leave a desktop application running all the time to grab the email. But if you have a spare PC somewhere or don’t mind having an extra app running on your desktop, it’s a versatile, reliable, easy win.

Gmvault: Gmvault is a slick set of Python scripts that run on Windows, Mac, and Linux and provide a wide range of capabilities, including backing up your entire Gmail archive and easily allowing you to move all that email to another Gmail account. Yep, this is a workable solution for easily moving mail between accounts.

What’s nice about Gmvault is that it’s a command-line script, so you can easily schedule it and just let it run without too much overhead. You can also use it on one machine to back up a number of accounts. Finally, it stores mail in multiple formats, including standard ones like .mbx that can be managed in traditional email clients like Thunderbird. Oh, and it’s open source and free.
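Because Gmvault is driven from the command line, wrapping it for several accounts is a short script. A hedged sketch, assuming the `gmvault` command is installed and on your PATH (the addresses and backup directory are placeholders; a cron job or Task Scheduler entry would invoke this):

```python
# Sketch: run Gmvault's incremental sync for several accounts from one
# scheduled job. Assumes `gmvault` is on PATH; addresses/paths are placeholders.
import subprocess

def build_cmd(address, backup_root):
    # `gmvault sync` is incremental by default; -d selects the database
    # directory, so each account gets its own folder under backup_root.
    return ["gmvault", "sync", "-d", f"{backup_root}/{address}", address]

def sync_all(addresses, backup_root="gmvault-backups"):
    """Sync each account in turn; return a dict of exit codes."""
    return {a: subprocess.run(build_cmd(a, backup_root)).returncode
            for a in addresses}

if __name__ == "__main__":
    print(sync_all(["you@gmail.com", "spouse@gmail.com"]))
```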

Upsafe: Another free tool is Upsafe. Upsafe is Windows-only, but it’s stone-cold simple. All you do is install the program, connect it to your Gmail, and download. It will do incremental downloads and even let you browse your downloaded email and attachments from within the app.

Upsafe isn’t nearly as versatile as Gmvault, but it’s quick and painless.

The company also offers a cloud backup solution, which is listed as free, along with a premium tier that increases storage beyond 3GB and lets you select whether your data is stored in the US or the EU.

Mailstore Home: Yet another free tool is Mailstore Home. Like Upsafe, Mailstore is Windows-only. What I like about Mailstore is that it has business and service-provider bigger brothers, so if you want a backup solution that goes beyond backing up individual Gmail accounts, this might work well for you. It can also back up Exchange, Office 365, and various IMAP-based email servers.

MailArchiver X: Next, we come to MailArchiver X, a $34.95 OS X-based solution. Even though this solution isn’t free, it’s got a few interesting things going for it. First, it doesn’t just archive Gmail data; it also archives mail from local email clients.

Somewhere on a backup disk, I have a pile of old Eudora email archives, and this could read them in and back them up. Of course, if I haven’t needed those messages since 2002, it’s not likely I’ll need them anytime soon. But, hey, you can.

More to the point, MailArchiver X can store your email in a variety of formats, including PDF and inside a FileMaker database. These two options are huge for things like discovery proceedings.

If you ever need to do really comprehensive email analysis, and then deliver email to clients or a court, having a FileMaker database of your messages could be a win. It’s been updated to be Sierra-compatible. Just make sure you get version 4.0 or greater.

Backupify: Finally for this category, I’m mentioning Backupify, even though it doesn’t really fit our topic. That’s because many of you have suggested it. Back in the day, Backupify offered a free service backing up online services ranging from Gmail to (apparently) Facebook. They have since changed their model and have moved decidedly up-market into the G Suite and Salesforce world and no longer offer a Gmail solution.

One-time backup snapshots

Our final category of solutions is the one-time backup snapshot. Rather than generating regular, incremental, updated backups, these approaches are good if you just want to get your mail out of Gmail, either to move to another platform or to capture a snapshot in time of what you had in your account.

Google Takeout: The simplest of the backup snapshot offerings is the one provided by Google: Google Takeout. From your Google settings, you can export just about all of your Google data, across all your Google applications. Google Takeout dumps the data either into your Google Drive or lets you download a pile of ZIP files. It’s easy, comprehensive, and free.

YippieMove: I’ve used YippieMove twice, first when I moved from a third-party Exchange hosting provider to Office 365, and then when I moved from Office 365 to Gmail. It’s worked well both times.

The company, disappointingly known as Wireload rather than, say, something out of a classic Bruce Willis Die Hard movie, charges $15 per account being moved. I found the fee to be well worth it, given their helpful support team and my need to make a bit of a pain out of myself until I knew every email message had made the trip successfully.

Backup via migration to Outlook.com: At roughly the time I was moving from Office 365 to Gmail, Ed Bott moved from Gmail to Outlook. He used some of Outlook’s helpful migration tools to make the jump.

From a Gmail backup perspective, you might not necessarily want to do a permanent migration. Even so, these tools can give you a great way to get a snapshot backup using a completely different cloud-based infrastructure for archival storage.

Partial, recent messages only

There is one more approach you can use, which is technically not forwarding and is somewhat more limited than the other on-the-fly approaches, but it works if you just want to grab a quick slice of your recent email, for example before going on vacation or a trip. I’m putting it in this section because it didn’t really fit anywhere better.

That’s Gmail Offline, based on a Chrome browser plugin. As its name implies, Gmail Offline lets you work with your recent (about a month) email without having an active internet connection. It’s certainly not a complete backup, but it might prove useful for those occasions when you just want quick, offline access to recent messages — both incoming and outgoing.

Recommendations

One of the reasons I do large “survey” articles like this is that every individual’s and company’s needs are different, so a different one of these solutions may suit you best.

Here at Camp David, we use a combination of techniques. First, I have a number of email accounts that forward to my main Gmail account, so each of them keeps a backup in addition to my primary Gmail account.

Then, I use Gmvault running as a scheduled command-line process to download regular updates of both my Gmail archive and my wife’s. Those downloads are then archived to my RAID Drobos, a second tower backup disk array, and back to the cloud using Crashplan.

While individual messages may be a royal pain to dig up if needed, I have at least five copies of almost every one, across a wide range of media, including one (and sometimes two) that are usually air-gapped from the internet.

Yeah, I get too much email. But hey, it’s a living.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

CES 2017: LG unveils digital assistant Hub Robot, sets to compete with Amazon Echo and Google Home


LG has thrown its hat into the digital assistant ring. Hub Robot, the company’s competitor to Amazon Echo and Google Home, was announced at the 2017 Consumer Electronics Show (CES) in Las Vegas on Wednesday.

Hub Robot “takes the concept of the smart home to the next level,” according to a press release. The small, white, anthropomorphic assistant uses Amazon Alexa’s voice recognition technology to connect with IoT devices in a user’s home or office.

While Hub Robot performs many of the same tasks as the Echo and Google Home, such as playing music, setting alarms, and offering weather alerts, LG said the difference lies in its ability to complete household tasks. The assistant can connect to LG appliances, allowing a user to turn on the air conditioning, preheat the oven, or change a dryer cycle with a voice command, according to the press release. The robot’s display can also show images of contents inside your refrigerator, and display recipes with audio instructions.

However, the assistant can only perform these tasks if a user already owns or decides to purchase LG appliances. But, given that Hub Robot runs on Amazon’s Alexa technology, it’s likely that LG could partner with other companies in the future.

SEE: Will LG’s new Gram laptops live up to their 24 hour battery claims?

Hub Robot is more human-like in appearance than the Echo or Google Home. It has two glowing blue eyes, and is designed to express a range of emotions by displaying different faces.

“The Hub Robot is designed to respond to consumers using body language, such as nodding its head when answering simple questions, and is always aware of activities inside the home, such as when family members leave, come home and go to bed,” the press release stated.

The device also features a camera that can distinguish between different people’s faces, and can be programmed to greet individuals in different ways—which neither the Echo nor Google Home can currently do.

But, Hub Robot will face stiff competition from those current industry leaders, which both announced new partnerships and use cases at CES 2017 as well.

Alexa will soon be found in Whirlpool appliances, Lenovo smart devices, Ford cars, and ADT security systems. The Alexa skill store now lists over 7,000 skills—up from 6,000 just three weeks ago. Clearly, companies see the benefit of integrating with Alexa to reach customers at home and in the office.

Meanwhile, Google announced partnerships with Daimler and Hyundai to integrate Google Assistant into vehicles, and also said that Google Assistant will come to Android TV this year. The company unveiled its Actions on Google developer platform in December.

The 3 big takeaways for TechRepublic readers

  1. At CES 2017, LG announced Hub Robot, a digital assistant built with Amazon Alexa voice recognition technology to connect with IoT devices in a user’s home or office.
  2. Hub Robot differs from competitors Amazon Echo and Google Home in that it can recognize different users, and connect to LG appliances.
  3. However, with both Alexa and Google Assistant rapidly expanding with new partnerships, it may be difficult for LG to compete unless it makes similar relationships with other manufacturers.

Software is eating our managers, but that’s okay, we have containers

We don’t often think of software development as a trendsetter for the rest of the business, but that’s exactly what’s been happening over the past few years. For the year ahead, expect the software world to continue to set the pace for the rest of the world.

Photo: Michael Krigsman

Omed Habib, director of product marketing at AppDynamics, recently published some compelling thoughts on why the rest of the world is following the lead of the software world, and how we’ll be seeing evidence of this in the year ahead.

Human teams become their own “microservices”: The microservices model “applies to more than just software,” Habib points out. Lately, software teams “are acting more like independent business units.” He illustrates how the microservices model has reshaped management practices within some of the big thought-leading companies such as Google and Amazon:

“Individual and autonomous application teams are organized around specific business objectives. At Google, these application teams include a crucial new role: site reliability engineers (SREs) who combine development and operations skills. As Google’s Ben Treynor defined it, ‘The SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.'”

This year, expect such loosely coupled, autonomous teams to appear in more industries beyond software, Habib predicts. “You will see more work teams that include their own developers, deployment models, performance engineers, business analysts and product management teams. Like miniature companies within a company, they will operate as autonomous groups responsible for innovation, execution, deployment, application performance monitoring, and business performance monitoring.”

Speaking of microservices… Agile methodologies will gain even more credence over the coming months and years, fueled by three technology movements — microservices, containers, and DevOps. The flexibility these new ways of deploying software offer, and the freedom from underlying systems and calcified processes, will boost the Agile principles of “interactions over processes, minimal viable products and responses over planning,” says Habib. “Enterprise software is now a whirling mass of microservices, APIs, and containers in constant communication with each other through the hybrid cloud.”

More “crowdsourcing” of development work: Almost contradictory to the trend toward autonomous software-driven business units is the crowdsourcing of projects — which may mean more entrepreneurial opportunities, but built on piecemeal work. “The manager still sets expectations and manages routines, but now the coder’s primary transaction is with automation,” Habib relates. “They submit code and move on to the next assignment. Managers may not even know the people (or bots) who submitted the code.” He notes how one 100% crowdsourced enterprise, Elastic.co, creator of Elasticsearch, “has built up enough contributors to challenge Splunk for the log analysis market.”

The lessons of application performance are increasingly being applied to business performance. Habib says the lessons learned from application performance management can be applied to all processes across the business. (By the way, this is AppDynamics’ business, so he sees this firsthand.) There’s now impetus to apply the same digital performance metrics to business on a wider scale. “Real-time insights into the customer experience can auto-correlate the relationship between specific performance data and business goals,” says Habib.

The bottom line is that every business is becoming a software business, so people who know software need to step up and lead. Habib quotes AppDynamics CEO David Wadhwani in this regard: “Accelerate your careers, redefine your goals. Don’t think of yourselves as IT professionals, think of yourselves as business owners who happen to run the technology as well.”