There is a lot of focus on the concept of ‘Smart Cities’ at the moment. The Internet of Things (IoT) can power just about anything in a community, from streetlights and waste bins through to cars and the roads themselves, and provide data in real time. That gives us an unprecedented capacity to make our cities run with greater efficiency and productivity than ever before.
However, too often the benefits of Smart Cities are expected to be self-evident. The mistaken belief is that simply building IoT-based applications for a city is sufficient to improve a citizen’s quality of life. In our drive towards hyper-connected and ultra-efficient cities, there’s a real need to make sure that, rather than just building IoT into a community for the sake of it, we’re working towards each component of a smart city having real and measurable citizen outcomes.
Perhaps we should be thinking about these things in terms of building Good Cities, rather than Smart Cities: environments where the health, security and lifestyle of citizens are enhanced not just by the existence of these IoT-enabled technologies, but because the city and community are actively using them as a platform and launching pad for highly customised solutions that account for citizens’ specific needs.
So, in addition to rolling out smart lighting, smart bins, environmental sensors, and so on, which monitor the city in real time and provide information to residents or government bodies as needed, the next step towards developing Good Cities is to start embracing principles behind the sharing economy to foster a stronger sense of community and participation by all people in an area.
The potential for government at all levels, and communities, to collaborate better and more efficiently through Good Cities is real. Digitised services can be made more accessible to a wider range of people, and these services can be made more accessible in more ways; reducing the need to travel (or the reliance on a car) can help those with minimal mobility, or those that lack access to a car or public transport.
Car sharing services could be provided to a larger number of people, and health, housing, and community support services could be made more widely available on an on-demand basis. More than anything else, the idea of a Good City is that the barriers between the community and government services can be broken down without making government any less efficient at delivering those services.
The other great focus of Good Cities needs to be an understanding that no one gets left behind. The disadvantaged, minority and niche communities within a city are at real risk of being further marginalised by these technology solutions if they’re rolled out without consideration for the linguistic, cultural, economic, or social conditions of the vulnerable.
A successful Good City would take the technology being rolled out and open it up as a platform that these groups can then apply to their own communities to improve their quality of life and their participation in the broader community.
Smart Cities only work as a concept when technology and the IoT are used with the explicit purpose of improving citizen outcomes; that is, when they’re used to create Good Cities. The risk is that, given the data these technologies collect and the efficiency they enable, Smart Cities could be viewed purely as an initiative to improve government efficiency, cut costs, or open new corporate revenue streams.
Smart City design used in this way will not generate the maximum outcomes that the technology currently promises. Only by remembering that all of this needs to be done for the benefit of citizens and communities will Smart Cities initiatives find their maximum traction.
Osterhout Design Group recently raised $58 million with a plan to expand its smart glasses and augmented reality footprint from government and enterprise customers to consumers. At the Consumer Electronics Show, ODG made good on its expansion plans as it unveiled its R-8 and R-9 smart glasses.
At CES 2017, ODG highlighted how its new smart glasses are designed to complement existing devices, provide augmented reality experiences and 3D interfaces for gaming. Specifically, ODG introduced the R-8 and R-9, two smart glass devices designed to bridge work and consumer applications. The R-8 will access familiar phone apps on a private screen that floats in your view. There are also augmented reality and virtual reality uses. ODG’s bet is that headworn devices will replace other screens, but for now smart glasses will be worn and removed frequently.
The R-9 has a 50-degree field of view and 1080p resolution, and is designed for prosumer and light enterprise uses. ODG’s headsets are aimed at both entertainment and work. The ODG smart glasses land just as Lenovo entered the fray earlier on Tuesday.
What’s unclear is how fast the consumer smart glasses market will take off. ODG’s smart glasses have historically revolved around a full computing platform in eyewear. While Google Glass popularized the notion of smart glasses for the enterprise (ODG and Vuzix could send Google a thank-you note), computing specs faced some backlash among consumers.
Enter ODG. We caught up with Pete Jameson, chief operating officer of ODG, to talk about how smart glasses and augmented reality are working for businesses. While the consumer move is notable, there’s real work being done in the enterprise. Here’s a recap of our chat with Jameson and what you need to know about ODG, which will raise its profile at CES but has had plenty of traction on its own.
The ODG history. Ralph Osterhout, CEO of ODG, incorporated his namesake company in 1999 almost as a think tank as he focused on other areas. By the late 2000s, ODG was focused on government and “man packable computers,” explained Jameson. ODG worked on smartphones with fingerprints and biometrics and multicore portable servers that could be used by the military in the field. “Headworn displays became a big focus for us in 2010. We wanted manpackable computers in glasses form. A lot of it is classified, but you could capture face comparisons with watch lists and relevant information that would be pulled from a server,” said Jameson. That government focus bridged to the enterprise nicely and business accelerated courtesy of Google Glass.
Enterprise interest. Jameson said that ODG started garnering enterprise interest after its government success. “The enterprise was a natural way to broaden our footprint,” he said. “We got into the enterprise to understand where the opportunity was with B2B. There were applications in healthcare, transportation and logistics. All of those use cases were going on early with our government customers,” said Jameson. The enterprise use cases became more visible to ODG in 2013. ODG’s R6 smart glasses were its first entry into the industrial market; they were designed for government use but have broad applications from the armed forces to agencies. The R7 device, in the market for about a year, was the first product that pushed completely into the enterprise.
The impact of Google Glass. Google Glass was a “boon to the industry,” said Jameson. “Google Glass brought awareness and an introduction to use cases even if Google was aggressive with what it could actually do,” said Jameson. “Google Glass was very helpful to us.” Indeed, the largest Google Glass developer, Advanced Medical Applications, became ODG’s largest partner when it saw the R7, which is built on an Android core. “Google educated the marketplace with awareness and use cases. Google also pulled developers into the market,” said Jameson. Advanced Medical Applications still develops for Google Glass as well as ODG and Vuzix.
Will consumers give smart glasses a shot? Jameson noted that the consumer market has a lot of potential, but is complex. The difference between the enterprise and consumer market is that smart glasses are a work tool that must be worn. “The consumer market is in the early days, but it will still be about what the product does and use cases. Google Glass was positioned as a connected camera display and you walked around with it,” said Jameson. ODG could find a niche in the consumer space as a “heads up computer.” In that respect, smart glasses could be used purposefully and gain adoption like tablets did, he added. “In the early days of the tablet it was used because it was easier and simpler to do tasks you’d do before. It’s early days, but I see that as well for smart glasses,” said Jameson.
Industries for smart glasses. Jameson said healthcare is the “biggest market opportunity we see right now.” Healthcare providers are using smart glasses to interact with patients, perform medical procedures and use telepresence. Transportation including large equipment, assembly and inspection is also a large vertical. Energy, oil and gas and logistics are other key areas. “Engineering and construction is also emerging. The ability to take a 3D CAD model, project those and interact is powerful,” said Jameson. Education is another area with promise.
Returns and leading uses. Jameson said there are roughly a handful of drivers for smart glass adoption for business and there are some overlaps. Here’s the breakdown:
Telepresence and remote expert. Multiple industries need the ability to have connection between a person doing a task and a remote expert. The ROI is clear, says Jameson. If something goes down, you don’t need to put that expert on a plane. Training is also a big use case.
Augmented reality. Overlays such as information, step-by-step instructions, video tutorials and inspections are key augmented reality uses. Jameson said industries typically combine augmented reality and telepresence.
Heads up displays. In this use case, smart glasses replace monitors. This screen swap allows a doctor to look at a patient instead of a monitor. A driver can operate a vehicle and get new information.
3D visualization. Engineers are increasingly using AR and smart glasses to bring an object up, create a visual representation and then walk around it.
Maturity of implementations. Jameson said healthcare has the most implementation traction and the most consistent adoption, but it isn’t the most mature market; much of the activity is pilots and first-phase implementations in real work environments. Healthcare does have some mature applications, along with transportation, but most deployments are early. Enterprises are working smart glasses into the overall workflow, developing content and building out the connectivity on the backend. Jameson noted that there aren’t many third-phase use cases where smart glasses are part of the day-to-day workflow and integrated into processes. Many enterprises are between the pilot and a second phase where smart glasses are clearly a useful tool.
Qualcomm has released the technical details of the latest flagship product in the Snapdragon range, the Snapdragon 835.
The Snapdragon 835’s name was quietly mentioned last year when it came to light that the US chip maker was working with Samsung to produce the new range of chips which power many of our smartphones and tablets today.
Back in November, at the Qualcomm Snapdragon Summit in New York, the company said the new processor, the Snapdragon 835, is based on Samsung’s 10-nanometer (nm) FinFET process technology.
The successor to the Snapdragon 820, the 835 is smaller, more compact and lighter than its predecessor, which was built on a 14nm FinFET process and consumed 25 percent more power.
Qualcomm says the use of this new form factor, complete with over three billion transistors and improved IP blocks, has resulted in a “30 percent increase in area efficiency with a performance boost of up to 27 percent, or 40 percent lower power consumption” — and also gives vendors the option to slot bigger batteries in their products.
According to Travis Lanier, Qualcomm’s senior director of product management, the Snapdragon 835 supports up to 11 hours of mobile device use or seven hours of media streaming, together with 4K video playback. Alternatively, devices powered by the chip will be able to handle on average three hours of continuous 4K video capture or approximately two hours’ worth of virtual reality gaming.
In connection to battery life performance, the company has also launched Quick Charge 4, which Qualcomm claims can give you up to five hours’ mobile device use with only a five-minute charge.
The Snapdragon 835 also comes with a set of upgrades to improve device performance:
Qualcomm Hexagon DSP: Includes TensorFlow and Halide support;
Qualcomm Kryo 280 CPU: Based on ARM Cortex technology, four cores at up to 2.45GHz, 2MB L2 cache, apparently a 20 percent performance boost;
Qualcomm Adreno visual processing: 25 percent faster graphics rendering with 60 times more display colors;
GPU: More efficient rendering of advanced 3D visuals for DX12, OpenGL ES, and Vulkan applications;
DPU: 10-bit 4K @60fps display, Q-Sync, and a wide color gamut;
VPU: 4K HEVC 10-bit playback, foveated video support.
Virtual reality is a major tech trend, and it has prompted Qualcomm to focus on improving power performance, graphics and video. According to Qualcomm’s Keith Kressin, truly “immersive” mobile experiences are “only possible with the right visuals, sounds, and interactions,” and a strong processor is therefore at the heart of the matter.
“We are at the cusp of VR being good enough,” Kressin says. “We’re not quite there yet but we are almost there.”
As a result, the Snapdragon 835 has been built for “extreme” pixel quality, low latency, spherical 360-degree view support, high audio quality and precise motion tracking through a VIO (visual inertial odometry) subsystem. The company has also moved to support HDR10 Ultra HD on the latest Snapdragon chip, which supports 4K streaming at 60fps.
In addition, the Snapdragon 835 has Q-Sync built in, which allows a device’s display to render content at the same frame rate as the GPU; Qualcomm says this results in displays that are “smooth and jank-free.”
With camera use so popular in today’s smartphones and tablets, Qualcomm has designed the Snapdragon 835 to improve how we take photos and the quality of the images produced. The company says the new processor improves camera zoom, functionality, and focus, and uses the “right core for the right job” through two separate systems, depending on whether the user has set the camera to a wide-angle shot or to the telephoto lens.
In addition, the Snapdragon 835 supports EIS 3.0 video stabilization with enhanced rolling shutter correction, provides better autofocus capabilities, and includes separate sensors for black-and-white and color pixels, leading to what the company calls “true to life” color.
When it comes to connectivity, the Snapdragon 835 supports gigabit speeds for 4G networking, and also includes the Snapdragon X16 LTE modem which hosts multi-gigabit channels, added with “5G in mind,” according to Qualcomm.
When it comes to security, according to Kressin, “no matter where we are [..] security is starting to move to the forefront.” As a result, Qualcomm has boosted the Snapdragon processor’s security offerings through Haven, which now includes the Qualcomm Secure Execution environment and App Protect – a system which validates and checks apps and OS integrity.
In addition, the chip maker has equipped the chip with “Secure Camera,” an element which aims to make biometric ID checks more efficient. The feature allows for the inclusion of iris scanning as a biometric identity marker alongside fingerprint checks.
The chip is also USB-PD and USB Type-C compliant.
Many of these improvements, Qualcomm says, have been made possible through machine learning (ML). Kressin says ML is the “common thread” connecting all of the major “pillars” of the processor; for example, graphics, audio and sensor improvements are critical for virtual reality, and algorithms that detect faces can improve the focus of photography.
Last year ZDNet’s Jason Perlow was impressed with the $199 Honor 5X and now we see Huawei bringing the Honor 6X to the US with some important upgrades while keeping the price low at just $249.99.
Huawei announced the Honor 6X in China on 18 October 2016 and today made the US release announcement at CES in Las Vegas. The Honor 6X packs in a lot of smartphone at an affordable price and I’ve spent the last 10 days testing one out.
In 2016, my favorite smartphone hardware maker was Huawei. There’s something about 2.5D glass, beveled metal edges, an extremely fast rear fingerprint scanner, and a solid-performing camera. The Honor 8 is an incredible device that can be found for just around $300 with near-flagship specifications. The Honor 6X is a bigger phone and competes in the large-display market with specs that approach flagship level.
Rather than giving the Honor 6X a beveled edge on the back, Huawei curved the sides into the back panel so the phone fits well in your hand. The dual cameras are stacked on top of each other, like on the Mate 9, in a bump that sticks out of the back. The fingerprint sensor is centered below the cameras.
The SIM/microSD card slot tray is on the left with the volume and power buttons on the right. The mono speaker and microUSB port are found on the bottom. An IR port and the 3.5mm headset jack are on top.
Specifications of the Honor 6X include:
Processor: HiSilicon Kirin 655 octa-core
Display: 5.5 inch 1920 x 1080 pixels resolution IPS LCD, 403 ppi
Operating system: Emotion UI 4.1 built on Android 6.0 Marshmallow
RAM: 3 GB
Storage: 32 GB internal storage with microSD storage card
Cameras: 12 megapixel and 2 megapixel dual rear cameras. 8 megapixel front facing camera
Wireless technology: FM radio, NFC, 802.11 b/g/n WiFi, Bluetooth 4.1
Battery: 3,340 mAh battery with fast charging technology
Dimensions: 150.9 x 76.2 x 8.2 mm and 162 grams
The Honor 8 has a USB Type-C port, but the Honor 6X still has a microUSB port on the bottom for charging up the device. You will still find a 3.5mm headset jack, as well as an FM radio.
The camera performs well and may be one of the best I’ve ever used at this price point. I included indoor, outdoor, sunset, and wide-aperture photos from the Huawei Mate 9 and Honor 6X in my image gallery below. The Honor 6X will likely satisfy most folks, especially if photos are mostly shared on social networks.
The Honor 6X runs Emotion UI 4.1 and Honor stated it will be upgraded to version 5.0 in the future. While some people have issues with EMUI, I find it to be a functional UI that adds to Android without being overly complicated or obtrusive.
There is no app drawer in EMUI 4.1, but you can always install something like the Google Now Launcher and be satisfied with the home screen experience. The software performs the same as the Honor 8 so check out that review if you are interested in EMUI 4.1.
One aspect of the dual camera Huawei devices like the P9, Mate 9, and Honor 8 is the advanced camera software. While the iPhone and Google Pixel devices have a very basic camera user interface, Honor brings an advanced experience similar to what we see on Samsung and LG phones. You will find wide aperture mode, professional mode, food mode, perfect selfie, beauty mode, makeup mode for the front camera, panorama, HDR, watermark, audio note, ultra snapshot, best photo, smile snapshot, audio control, timer function, touch to capture, and time-lapse mode.
Honor is a brand that targets millennials and believes that providing slick camera functions is better than a simple automatic interface. I personally enjoy using some of these modes and get more comments from people when I take unique shots using these modes.
I’ve had the Honor 6X in hand for about 10 days and if I wasn’t such a flagship-loving smartphone user I could easily get by with a device like the Honor 6X. People who are looking for great value and solid specifications may want to consider the 6X. It’s nice to have a smartphone that lasts a couple of days and helps you be creative with photography.
In November I wrote about the different ways manufacturers are using dual rear cameras and in this case the Honor 6X has dual cameras for faster focus and wide aperture tricks. The main rear camera is a 12 megapixel shooter, but the second one is only 2 megapixels. It is not a monochrome lens like the Huawei P9 and Mate 9, but is a color one that is used to improve focus times and provide wide aperture effects.
Huawei has some fantastic hardware design language and the Honor 6X continues that trend. It definitely does not feel like a low price Android smartphone and thankfully it also performs better than you would expect for a mid-level device.
People looking for a big-screen phone at a very affordable price will appreciate the Honor 6X. It will be available on Amazon, Newegg, eBay, and other online retailers, with in-store availability at Best Buy, Costco, and others. You will be able to purchase the Honor 6X for $249.99 in gray, gold, and silver.
Intel is acquiring a 15 percent ownership stake in mapping and location service provider HERE. The news comes roughly a week after Chinese technology giant Tencent, Beijing-based mapping company NavInfo, and Singapore state-owned investment firm GIC announced plans to jointly take up a 10 percent ownership share of the mapping company along with current owners Audi, BMW, and Daimler.
As part of the investment, Intel will work with HERE and its consortium of automaker owners to design, test and deliver proof-of-concept architecture supporting real-time, high-definition map updates. The architecture will aim to improve the precision and accuracy of mapping technology used in autonomous driving systems.
For instance, current navigation technology can pinpoint a car’s location to an accuracy within meters, but HD mapping can tighten accuracy down to a matter of centimeters. Ultimately this level of precision will help self-driving vehicles better position themselves on the roadway, which increases safety, reliability and driving functionality.
HERE currently offers this kind of mapping technology as a cloud service, called HD Live Map, so Intel’s contribution will likely fall on the hardware side. Eventually, Intel and HERE plan to offer the architecture to other automakers as fuel for the continued growth of cloud computing and the Internet of Things.
“Cars are rapidly becoming some of the world’s most intelligent, connected devices,” said Intel CEO Brian Krzanich, in a statement. “We look forward to working with HERE and its automotive partners to deliver an important technology foundation for smart and connected cars of the future.”
“A real-time, self-healing and high definition representation of the physical world is critical for autonomous driving, and achieving this will require significantly more powerful and capable in-vehicle compute platforms,” added HERE chief executive Edzard Overbeek. “As a premier silicon provider, Intel can help accelerate HERE’s ambitions in this area by supporting the creation of a universal, always up-to-date digital location platform that spans the vehicle, the cloud and everything else connected.”
Intel did not disclose a purchase price for the stake in HERE but said the transaction is expected to close in the first quarter.
Amazon is trying to push its Alexa voice assistant further into your home through a partnership with China-based electronics giant Tongfang, which will produce Fire-powered 4K UHD TV sets under the Seiki, Westinghouse and Element Electronics brands.
The 4K TV sets will range from 43 to 65 inches, and of course, the TVs’ remote will feature an integrated microphone so Fire TV’s Alexa digital assistant can pick up your voice. The sets will feature 3GB of RAM and 16GB of internal storage for apps.
The TV sets will be sold on Amazon and in retail stores. Pricing and availability weren’t shared.
“Smart TVs can be cumbersome and difficult to use,” said Sung Choi, vice president of marketing, Tongfang Global. “Our new line of 4K Ultra HD Smart TVs – Amazon Fire TV Edition represent an elevated customer experience powered by the highest performance processors in the industry, a unique voice-controlled remote control, and Amazon’s cinematic viewing experience.”
Amazon’s move isn’t new, as Roku and Google have been working to integrate their software into TV sets for years.
However, as ZDNet’s Larry Dignan pointed out earlier on Tuesday, Amazon’s steps to put Alexa everywhere are being embraced by several hardware partners. We are sure to see more Alexa-enabled devices at CES 2017, and the strategy may pay huge dividends in advancing Alexa over Apple’s Siri and Google Home, not to mention Prime sales.
It’s only been nine months since Samsung announced the Galaxy S7 and Galaxy S7 Edge, but there are already plenty of rumors surrounding its follow up: the Galaxy S8.
Based on speculative reports and several credible leaks, here is what we think might appear on Samsung’s 2017 flagship phones, as well as when you can expect them to arrive.
Will there be two Galaxy S8 models?
The Korea Herald claimed Samsung will only offer a curved Galaxy S8 this time around. A leak from Phone Arena also only mentioned a dual-edge curved display device — with no talk of a flat model. Nevertheless, SamMobile seems to think there will be two models, known internally as Dream and Dream2.
South Korean outlet The Bell also claimed Samsung will release a 6.2-inch display model alongside a 5.7-inch display model — and both will be curved. Taking a page from Apple’s playbook, the new phones might be called the Galaxy S8 Plus and Galaxy S8, respectively.
What will the Galaxy S8 look like?
The Wall Street Journal reported that the new device will feature a side-mounted button. Samsung is also set to launch Viv, the personal assistant developed by Siri co-founder Dag Kittlaus. Remember, Samsung acquired Viv in October 2016.
Bloomberg said Samsung will get rid of the home button in favor of an “all-screen front”. The report described the display as wraparound and noted there will be a virtual home button available in the lower half of the glass.
SamMobile said Samsung will include a USB Type-C port for charging and audio and that the 3.5mm headphone jack will be ditched — something Fone Arena also reported. Plus, the Galaxy S8 could offer dual speakers. Samsung acquired Harman in late 2016, so it’s thought the speakers will be Harman-branded.
One leaked set of specifications suggests the Galaxy S8 will come with a 5.2-inch display (4096 x 2160 pixel resolution), making it slightly bigger than the current Galaxy S7. SamMobile said the Galaxy S8 will feature a 2K display, as well.
PocketNow and other sites — based on a Weibo leak — have said the Galaxy S8 might feature a dual rear camera, with a 12-megapixel sensor and a 13-megapixel sensor. But a different leak said the Galaxy S8 will have a 30-megapixel rear camera with optical image stabilization. It’ll also have a 9-megapixel front-facing camera, ET News said.
Ice Universe (via a report by MySmartPrice), a well-known tipster for Samsung, said the Galaxy S8 will come with 8GB of RAM, while another leak said it’ll have 6GB of RAM backed by a 3.2GHz octa-core Qualcomm Snapdragon 830 chip.
A Phone Arena leak pointed to two processors, the Exynos 8895 and the Snapdragon 830, as well as 6GB and 8GB of RAM. This will probably depend on which market you’re in. Another rumor further claimed the Galaxy S8 will have a 4,200mAh battery, fingerprint and retina scanners, and a built-in mini projector.
And finally, expect Samsung’s next flagship to run the latest version of Android overlaid with its TouchWiz software. SamMobile has said it’ll have a voice assistant called Bixby, powered by Viv (which Samsung has confirmed), as well as an always-visible status bar.
When will Samsung announce the Galaxy S8?
We could see the Samsung Galaxy S8 launch at Mobile World Congress 2017, which kicks off Feb. 27, as there has been talk of the launch being brought forward in order to recapture consumer interest and confidence after the explosive issues with the Note 7.
Leakster Ricciolo tweeted that Samsung will indeed introduce the phones on Feb. 26 — something Phone Arena also backed up, thanks to a Weibo leakster. However, Samsung could instead launch the Galaxy S8 in New York City on April 8, according to SamMobile, allowing it to avoid the chaos of competing phone announcements at MWC.
Of course, these are all rumors. We suggest you take them with a healthy dose of salt until we get closer to the rumored announcement. Samsung is remaining quiet on specifics, but is providing some hype for its next flagship.
It seems that people drop their phones a lot, and they seem to drop them more often when they’re around the toilet, puddles, or even pints of beer. There are a number of tricks on the web that claim to be able to suck the moisture out of your device, but in my experience they have a less than 50:50 chance of working.
Redux is a lunchbox-sized device that can bring liquid-damaged phones back to life. It has revived phones damaged by beer, mud, and even soup, and the company boasts a success rate of 84 percent.
Redux works by using a combination of vacuum pressure and heat to dry out the wet phone or tablet. The phone is then removed and partially charged to assess whether the recovery succeeded. The process can remove all forms of liquid (not just water) and takes less than an hour.
Pricing for the successful recovery of a device is $50 for a basic phone or a data device and $90 for a smartphone, in addition to a $10 diagnostic fee. If your handset is not recoverable, you’re only down the $10 diagnostic fee.
If you’re particularly clumsy, then Redux offers a $29.99 membership program that comes with two free device recoveries in the price.
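The cost structure above is simple enough to sketch in code. In this illustrative Python snippet, the dollar figures are the ones quoted above, while the helper function and the scenarios are my own, not an official Redux calculator:

```python
# Redux pricing as quoted above; function and scenarios are illustrative.

DIAGNOSTIC_FEE = 10.00                                  # charged per attempt
RECOVERY_FEE = {"basic": 50.00, "smartphone": 90.00}    # charged only on success
MEMBERSHIP = 29.99                                      # includes two free recoveries

def pay_per_incident(device: str, recovered: bool) -> float:
    """Cost of one walk-in recovery attempt, in dollars."""
    cost = DIAGNOSTIC_FEE            # always due, success or not
    if recovered:
        cost += RECOVERY_FEE[device]
    return cost

print(pay_per_incident("smartphone", recovered=True))      # 100.0
print(pay_per_incident("basic", recovered=False))          # 10.0 (failed attempt)
# Two smartphone recoveries at $100 each vs. the $29.99 membership:
print(2 * pay_per_incident("smartphone", recovered=True))  # 200.0
print(MEMBERSHIP)
```

Note that the article doesn’t say whether the $10 diagnostic fee is waived for members, so the comparison with the $29.99 plan is only approximate.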
To give you the best chance of recovering your water-damaged phone, Redux recommends that you:
Turn off the device immediately
Remove the battery if possible (some devices, such as iPhones, do not have removable batteries)
Do not charge the device or put it in rice
Take the device to the nearest Redux center as soon as possible
Nearly 700 Verizon Wireless stores nationwide are now offering the Redux service.
I can’t tell you how many letters I get from readers asking how to break into the mobile app business. Most tell me they have no software experience, little cash, and expect to make a bajillion dollars.
As I’ve written before, lack of experience, skill, and money is not a formula for software success. But as many of you have told me in no uncertain terms, who am I to insist that you can’t dream? What if you’re the one with the blockbuster idea and I, jaded old-school software entrepreneur that I am, just don’t see it?
In this article, I’m going to take you through the steps you need to get an app up on the Android and Apple app stores. I’ll outline tools, resources, and steps you’ll need to take. I’ll even show you some tricks for building your own apps without any programming skill whatsoever.
Whether you make any money is out of my hands. At least you’ll have a starting point. Over the next weeks, I’ll write more about how to really understand the software business. But for those of you who are impatient to get started, here’s what you need to do.
Sign up as a developer
Let’s get started with the basics — getting access to the app stores. In this article, we’ll look at the Google Play store and the iOS App store because they are, by far, the biggest players. Once you complete an app, you’ll need to submit it to the app store and each company will go through a review process designed to determine if your app is up to basic quality standards (and, sadly, those standards are very low), and make sure you’re not embedding malware or other nastiness in the app.
Once accepted in the app store, the two companies will list your apps and you’ll get a percentage of the selling price. When Apple set up the original App Store, they paid 70 percent of the selling price to developers, taking a 30 percent cut for themselves. While Apple’s 30 percent cut may seem like a lot, those who have been in the software business for a while know that’s actually a pretty good deal. For software sold through retail stores, developers might see less than 30 percent of the final sale price. With app stores, developers keep a lot more.
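To put that split in concrete terms, here's a quick sketch of the arithmetic; the price is a made-up example, not a recommendation:

```python
def developer_payout(price: float, store_cut: float = 0.30) -> float:
    """Return the developer's share of one sale after the store takes its cut."""
    return price * (1 - store_cut)

# Under the standard 70/30 split, a hypothetical $2.99 app pays the
# developer about $2.09 per sale.
print(round(developer_payout(2.99), 2))
```

Compare that to a retail-box deal where the developer's share can dip below 30 percent of the sticker price, and the store cut looks a lot more reasonable.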
In order to get into the game, you’ll need to sign up for each app store. For iOS, you’ll want to join the iOS Developer Program, which costs $99/year. For Google Play, you will need a Google Account, and then you can go to the Developer Console and pay your $25. Both programs provide some excellent developer resources, but I’d strongly recommend you tap into the Develop and Distribute tabs of developer.android.com for some great guides on both product design and marketing.
Decide what to build
Congrats! You are now a developer. Now you need to build an app. Later in this article, I’ll take you through a number of app development tools that will help you build your first app without any programming background. You’ll want to explore them in depth, because the capabilities of those tools will help you determine what you can and can’t build.
Even so, you have a couple of major choices upfront. Clearly, you’re not going to be building a revolutionary new tool that uses all of the capabilities of smartphones and tablets. You’ll need to learn to code for real to do that. If you’re using a non-programmer’s app building tool, you’re pretty much limited to form and data-based apps, mobilized Web pages, and games.
There is, of course, no guarantee you’ll see any money from any of these. The app market is hugely competitive. Even so, I’ll start off by recommending you avoid mobilized Web pages. We are all used to getting our Web page content for free, and a mobile app that just reformats that information is unlikely to generate an app store sale. The way to make money on mobilized Web pages is by contacting companies with their own basic Web pages and offering to turn them into free apps. You won’t get a stream of income from app sales, but you could get a decent services fee for creating such an app for someone else.
Forms-based apps handle data entry, interact with databases, and store the data for later retrieval. They’re relatively easy to build and you might be able to build something based on an area of knowledge you have.
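To make "forms and data" concrete, here's a minimal Python sketch of the pattern — collect fields, validate, store, retrieve. The record fields (a hypothetical wine-tasting notebook) are invented for illustration:

```python
import json

def save_record(records: list, name: str, rating: int) -> None:
    """Validate one form entry and append it to the in-memory store."""
    if not name or not 1 <= rating <= 5:
        raise ValueError("name is required; rating must be 1-5")
    records.append({"name": name, "rating": rating})

# Simulate two form submissions, then serialize for later retrieval.
records = []
save_record(records, "Pinot Noir 2012", 4)
save_record(records, "Shiraz 2010", 3)
print(json.dumps(records))
```

A real forms-based app wraps exactly this loop in a mobile UI and swaps the list for a database, but the shape of the logic is the same.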
Games, of course, are games. Games are the hottest segment of the mobile app market, but it’s also the most crowded segment and the one where it’s hardest to stand out. That said, building a game is fun for its own sake, so you might want to give it a try.
Decide how to price it
Next comes pricing. Remember that apps are cheap by comparison to PC and Mac desktop apps. Just about everything is under ten bucks. More to the point, and here’s a big hint, nearly all of the biggest money producers are apps that are free to download and offer in-app purchases.
Frankly, if you want to make money, I’d recommend you start with the in-app purchase business model. Personally, I don’t like in-app purchase — but you can’t deny the success the model has had. After all, buyers can download, try, and get sucked in. If they find value, then they are far more likely to buy your in-app upgrades.
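To see why the freemium model wins so often, here's a rough back-of-the-envelope comparison. Every number below is an illustrative assumption, not market data — the point is that free downloads reach far more users, and even a small conversion rate can out-earn a paid app:

```python
def paid_revenue(downloads: int, price: float, store_cut: float = 0.30) -> float:
    """Revenue from a paid app: every download is a sale."""
    return downloads * price * (1 - store_cut)

def freemium_revenue(downloads: int, conversion: float, iap_price: float,
                     store_cut: float = 0.30) -> float:
    """Revenue from a free app where a fraction of users buy one in-app upgrade."""
    return downloads * conversion * iap_price * (1 - store_cut)

# Hypothetical scenario: a $1.99 paid app reaching 1,000 buyers, versus a
# free app reaching 50,000 downloads with 5% converting to a $2.99 upgrade.
print(round(paid_revenue(1_000, 1.99), 2))
print(round(freemium_revenue(50_000, 0.05, 2.99), 2))
```

Under those (made-up) assumptions, the free app earns several times what the paid one does, which is exactly the "download, try, get sucked in" dynamic described above.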
Choose an app-building tool
Appery.io: This tool builds a nice integration of data services with apps. It’s a little complex for beginners, but it’s mostly drag and drop. Their free plan allows a maximum of three pages and one user, but that’s really all you need to get started.
Good Barber: Seriously, that’s their name. They have a 30-day free trial. After that, plans start at $16/month. What distinguishes this product is there are some very nice design elements, Google Font integration, and a good selection of icons to choose from, as well as some good YouTube tutorials and webinars.
Appy Pie: Appy Pie is free if you let them run ads in your app. If you upgrade to their $7/mo plan, they won’t run ads, and they’ll help you monetize with iAds and AdMob. They have preset app categories you can choose from like church, restaurant, radio, etc. They also offer a relatively wide range of features you can add to your apps like GPS locations, notifications, and more. This is a good choice if you don’t think people will buy your app, but might enjoy downloading it for free. The monetization with ads can help you offset your costs.
GameSalad: This product has a powerful drag-and-drop game creator, good enough to build an Angry Birds or Flappy Bird-style game. You import graphics and assign behaviors, and build up your games from there. A free version includes ads, but there’s a $299 version that removes the ads and makes in-app purchases available. If you want to make money from games, you need in-app purchases and these folks make that process relatively easy.
Create app graphics and icons
No matter what sort of app you create, you’ll need some app graphics and home screen icons. I personally recommend using Photoshop and Illustrator, but they are both relatively difficult to get started with and moderately expensive. If you want a cheap or free tool, look at Canva.com. This is a nice little online design program that can get you most of the way to your final image.
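One chore where a few lines of code beat hand-exporting is generating the long list of icon files the stores expect. The sizes below are illustrative only — check the current Apple and Google guidelines, since the required sets change between OS releases:

```python
# Illustrative icon sizes (in pixels/points); verify against the current
# App Store and Play Store asset guidelines before shipping.
IOS_ICON_SIZES = [29, 40, 60, 76, 1024]      # app icons up to the store artwork
ANDROID_ICON_SIZES = [48, 72, 96, 144, 192]  # mdpi through xxxhdpi launchers

def icon_filenames(prefix: str, sizes: list) -> list:
    """Build the output filenames for each required square icon size."""
    return [f"{prefix}_{s}x{s}.png" for s in sizes]

print(icon_filenames("icon_ios", IOS_ICON_SIZES))
print(icon_filenames("icon_android", ANDROID_ICON_SIZES))
```

From there, any batch image tool (or an image library) can resize one master 1024x1024 image down to each filename in the list.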
No matter which platform you build for, you’ll need to upload screenshots to the appropriate app store. Both iOS and Android allow you to press a sequence of keys, and a capture of the screen will be deposited into your camera roll.
On iOS, you’ll want to get just what you want to capture on your screen, then press and hold the Home button. While holding the home button, press the Sleep/Wake button. On your Android device, the screenshot options tend to vary (I know, you’re surprised). For my Galaxy S4, I have to hit the Home button and the Power button at exactly the same time. If I time it right, it works perfectly. Some Android devices have a Take Screenshot option on the Restart screen while others use the volume keys. You’ll need to Google your specific device, but it’s an easy search.
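If the button combos on your Android device prove fiddly, the Android SDK's adb tool offers a device-independent route, assuming USB debugging is enabled. This sketch only assembles the two commands involved — you'd run them in a terminal or hand them to subprocess.run:

```python
def screenshot_commands(out_path: str = "screenshot.png") -> list:
    """Build the adb commands that capture a screenshot and pull it to the PC."""
    remote = "/sdcard/screenshot.png"  # temporary path on the device
    return [
        ["adb", "shell", "screencap", "-p", remote],
        ["adb", "pull", remote, out_path],
    ]

for cmd in screenshot_commands():
    print(" ".join(cmd))
```

The same two commands work on essentially any Android device with developer options enabled, which spares you from googling each model's button sequence.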
Make an intro video
One of the very best sales tools you can offer is a video of your app. Once you’ve built your app, upload a video to both app stores. Although Google Play has long supported intro videos, iOS has only recently introduced the capability with iOS 8.
Test your app
Before you submit your app, you’ll need to test the living heck out of it. This is not something you can do yourself. Because you know how your app is supposed to behave, you’re unlikely to find the sequences that send it into a tailspin. Get lots of friends to try it out. Let your mom or grandmother try it out. Give it to your dad. Most apps can’t survive an encounter with dads, so that’s always a good way to test. If you can, release early versions to users who have expressed interest in what you offer and see if they can break it.
It is good to find bugs. Any bug you find before you ship is likely to mean better sales and fewer returns. So test, test, test.
Submit your app to the app stores
Okay, you’ve reached the big day. Time to upload your apps and by tomorrow, you’re going to be a zillionaire. Well, not exactly. But even so, go back to the developer links I provided at the beginning of this article and submit your apps, good descriptions, icons, video tutorials, and screenshots. If you do everything right, you’ll get a confirmation and you can sit back and wait to see if the app is accepted.
Back in the days when I submitted my 40 silly iPhone apps, the average wait time was 13 days. I’m told it’s substantially less (for most apps), but your mileage is likely to vary. Good luck. The email that says your app is on the app store may be one of the most exciting you receive.
Market your app
Even if you’re just starting out, you can do some marketing. Word of mouth, demos, telling friends, and asking friends to tell friends can get the ball rolling. Use your social networking resources, respond to the app pages online, and always be proud to show off your app.