Tech trends to look out for in 2016


This time last year I was looking forward to what mobile would offer us in 2015. This year I’m casting the net further afield, but inevitably mobile still features to one extent or another, whichever technological trend you look at. After all, for many ‘the mobile internet’ is now simply ‘the internet’; the distinction has become immaterial as mobile becomes their main touch point for information and online interactions.

Here’s what I expect to be the main trends, concerns and areas of attention over the course of 2016.

Web design

There’s a lot of development around web design and UI patterns right now, fuelled by more businesses recognising the need for their sites to be mobile/multi-platform ready. As mobile traffic starts to overtake desktop traffic, and continuity and consistency of customer experience across multiple touch points becomes increasingly relevant, so too does the adoption of approaches and templates that make it easier to achieve.

The easier you make it for a customer to recognise regular functionality, such as registering an account or going to the checkout, the better – we’re now at a point with commonly used features where it’s no longer necessary to reinvent the wheel just because you can. If there’s a tried and tested model of usage that is instantly understandable by the user, then in most cases it’s going to be better to adopt that than force a person to work through something new.

This link goes through the common UI design patterns you can expect to see in 2016. In fact you’ve probably seen a few already.

Google AMP

In October last year, Google announced its Accelerated Mobile Pages (AMP) project. The objective was to develop a set of technical specifications that publishers could adopt in order to make their websites and online content load faster when accessed by mobile browsers.

Following successful testing, Google are set to roll it out in February this year and have already secured commitments from top news, advertising and analytics providers. For developers it should mean that the same code can be used across multiple mobile platforms in order to make pages, regardless of content, load more efficiently. For users it should mean less time waiting for pages to load and less data spent in the process.

This page provides more info on AMP and links to instructions and code for developing your first page using the specification.
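For a sense of what adopting the specification involves, here’s a minimal AMP page sketched from the project’s published requirements. The URLs and image are placeholders, and the mandatory <style amp-boilerplate> rules (a fixed block defined verbatim by the spec) are indicated by a comment rather than reproduced in full:

```html
<!doctype html>
<html ⚡>
  <head>
    <meta charset="utf-8">
    <!-- Every AMP page must point at the regular (non-AMP) version of itself -->
    <link rel="canonical" href="https://example.com/article.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1">
    <!-- The spec's mandatory <style amp-boilerplate> block goes here, verbatim -->
    <!-- The AMP runtime is loaded asynchronously from Google's CDN -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
  </head>
  <body>
    <h1>Hello, AMP</h1>
    <!-- Images use the amp-img component instead of a plain <img> tag,
         with explicit dimensions so layout can be calculated before load -->
    <amp-img src="photo.jpg" width="1280" height="852" layout="responsive"></amp-img>
  </body>
</html>
```

The restrictions (no author-written JavaScript, components instead of raw tags, explicit sizing) are what let AMP pages be laid out and rendered so quickly.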

Mobile payments

Mobile payment options and awareness grew over the course of 2015. However, in spite of that, usage appears to have remained low, so expect 2016 to be about how to reduce the gap between awareness and adoption.

Doing so will be as much about increasing opportunities to use mobile payments as it will be about encouraging people to use them. In reality, mobile payment is just another way to use your credit card. Once mass adoption kicks in that won’t be such a big deal, but right now what’s the incentive for people to use a mobile wallet over other payment options that are so well established?

Starbucks provided a clear value proposition with their adoption of mobile payments, incentivising use with special offers on their own products as well as those offered by strategic partnerships (Spotify and The New York Times for example). In July last year, Starbucks CEO Howard Schultz revealed that mobile transactions accounted for 20 percent of all in-store sales, equating to more than 9 million mobile transactions a week and a 4 percent increase in foot traffic.

Others are starting to run loyalty programs too: Android Pay launched last year with Coca-Cola (awarding points for use against purchases at Coca-Cola vending machines) and Samsung Pay incentivised activation on its handsets in the US with a free wireless charging pad.

Fragmentation, point of sale compatibility and ongoing concerns about the security of payments are still hurdles to adoption, but despite those it does seem that mobile payments are now finally in a position to start gaining some traction.

Rising mobile video adoption

This article from last year covers a comScore report highlighting how people consume video on mobile devices.

It’s clear that improvements in mobile technology and the networks supporting them have reduced the barrier to people accessing what they want, whenever they want. For example, the report shows that YouTube app usage rates on smartphones increased by 34 percent over the year, nearing 5 hours of viewing a month; on tablets it was an average of 9 hours a month.

As this trend continues over the course of 2016, advertisers, businesses and media will increasingly be able to reach people with richer content on an everyday basis. With this in mind, expect mobile viewing to continue to rise, with mobile advertising increasing as a result. Also expect these increases to be supported by greater use of video in all aspects of mobile, as people are now willing to watch more video content for longer on their devices.

Wearable technology

Yes, 2015 was still slow moving for wearable tech. Google Glass stopped selling and smartwatches were still absent from many a wrist, but hey, Rome wasn’t built in a day.

In truth, the biggest hurdle isn’t the technology, it’s social acceptance. Google Glass is apparently coming back this year, but as a tool for enterprise, which makes sense: used within industries with a legitimate business case, Glass provides genuine value without the social stigma to worry about.

From a consumer perspective the sticking points for many are still design, varying standards, having to tether your device to a phone (in most cases), and of course the ever present poor battery life. All these are getting better though. Variety in design is growing, standards are maturing, both Apple and Google have taken steps towards making devices less dependent on direct tethering (through the introduction of Wi-Fi support), and devices with better battery life are starting to arrive.

It’s worth considering that eventually we will come to a point where we will no longer consider the delivery medium; we’ll simply expect to access information and services regardless of where we are, and that it will be delivered to us in the right context for that environment.

Think of television as an example. When I’m watching something at home with others (or on my own) I watch it on the big TV. When the big TV is already in use I use my tablet. When I’m on the train, squashed in with the other rush hour travellers, I look at it on my phone. That makes me sound like a TV junkie, but you get the idea. The point is I don’t even consider the hardware I’m using to watch what I want to watch anymore, I just use the right tool for the job without even thinking about it. Smartwatches and the Internet of Things will eventually become part of that ‘continuity of information access’ model for more and more people.

Perhaps one day we’ll have the Apple iEarBud whose UI is purely audio with voice control. You don’t think so? Give it time…

Internet of Things

IoT had a solid presence at CES, showing that there’s a real impetus behind moving it forward this year.

For starters Microsoft and Samsung announced a new partnership to integrate Samsung devices and software with Windows 10 devices, allowing the monitoring and controlling of household appliances through the use of apps that run on Windows.

Meanwhile, Qualcomm were putting a lot of focus on connected devices within the medical/health care vertical of IoT. Items on display covered diagnostics, therapeutics, and physiological monitoring.

Kwikset’s Kevo Smart Lock turns a smartphone into keys for a home through the use of an accompanying app for iOS and Android devices. Keys can be set up and deleted and even temporarily given to other users to allow access on an ad hoc basis. Of course the main hurdle will be people’s concerns around security – Kevo will track and audit activity, detailing when eKeys have been locked, unlocked, sent out or accepted, but is that enough to make you feel comfortable with the concept?

Intelligent beds, smart shoes and other not so obvious items were mixed in with the more recognisable watches, jewellery and fitness trackers, showing that there’s a definite desire to explore the possibilities that a world of connected devices and apparel can offer. It was also clear that developers are now looking towards more focused areas of use whilst the larger consumer market continues to be slow to adopt; health care and engineering being the prime examples.

The knock on effect of a spreading Internet of Things will of course be even more data to consume and understand. Better and more effective ways of analysing the information as well as more accessible and creative ways of visualising the results will be needed in order to capitalise on the greater volumes of content streaming in.

Analytics and Big Data

And so we’re reaching a point now where Big Data is less a buzzword and more something that businesses actually do. The means to manipulate and analyse enormous amounts of information (that isn’t necessarily stored in one place) is going to be key to customer insight and maximising ROI. It’s also going to prove invaluable in improving the flows and processes internal to businesses as well.

As IoT, Cloud, and Big Data continue to converge, anyone that is still ignoring the importance of data and analytics, be it Big Data or the more traditional and established forms of analysis, is going to miss out on a vital source of intelligence and the means to understand it. This article on Big Data predictions for 2016 covers a lot of the points of interest.

Focus on customer service as the differentiator

As technology matures it becomes an equaliser rather than a differentiator. In some markets now the differences in the process you step through are minimal, regardless of the competitor you choose to go with. Where businesses have their web and mobile solutions figured out, expect them to start putting the focus back on the level of customer service they provide rather than the technology channel they use to deliver it, and expect them to be supporting this with analytics and Big Data.

Virtual Reality

It seems to be the right time for VR; the tech being used is better, and unlike before people appear to be open to the greater breadth of potential for its application.

Sure, it’ll start off being used for games this year, but this time round development frameworks for VR are more accessible and users have already dipped their collective feet into the concepts of augmented and virtual reality via their smartphones (Google Cardboard for example). What this should mean is greater interest, quicker adoption and more diverse application of the technology.

It also doesn’t hurt that the likes of Facebook are backing VR with their investment in Oculus Rift. They aren’t the only ones; Sony, Samsung and HTC are also coming to market with their offerings too.

Digital assistants, robo advice and AI

As Siri, Google Now and Cortana continue to improve, and as consumers search for information online more and more, we’ll be seeing “robo helpers”, online guided advice and the application of AI driven systems become more commonplace.

In any instance where advice can be automated, there’s the potential to apply technology to augment the supply of that advice to the end user. There’s good reason to consider it: automated services can allow your workforce to serve more customers without a drop in quality, by letting you intelligently focus effort on the areas of customer service that deserve human involvement whilst leaving the simpler solutions or decisions to systems that can deliver them with minimal or no human intervention.

Used intelligently, and blended with access to a human end point as needed, automated advice systems will provide customers with efficient access to knowledge and help whilst simultaneously allowing businesses to reduce costs and scale effectively. In addition, integrating these mechanisms into existing processes will allow businesses to pivot their delivery model towards Millennials and Generation Z as they become the key demographics to target in the not so distant future.

Millennials and the drive for a seamless digital experience

As mentioned above, the expectation for continuity across digital platforms, and through to the associated real world touch points, is going to become increasingly important.

Although this appears to be the case across a wider range of age groups than some would have expected, it is the tech-savvy Millennials who are driving this need the most, both with their technical knowledge and their assumptions of what should now be the norm. And as they represent such a high percentage of those who spend, should you really be ignoring their needs?

An increasing number of people will check online before making any kind of purchasing decision, be it on a product or a service. Right now that means many consumers expect a brand to have a mobile presence, with all the associated features and advanced capabilities that implies.

However, whilst that may be the focus right now, especially whilst the Internet of Things and wearables are still moving through their slow early days of adoption, we should prepare for the defining desire to be a seamless multi-channel experience – one that is accessible at the point of need rather than dependent on a particular device. So be prepared to start thinking about how you are accessible via desktop, mobile, tablet, TV, watch, insert your favourite home appliance here, as it’s only a matter of time before this becomes the expectation that mobile is today. Understandably, frameworks that help developers achieve this easily are going to be a godsend moving forward, so look out for these in 2016 too.

When it comes to linking the online with the real world, technologies such as Apple’s proprietary iBeacon and Google’s Eddystone, as well as other more open and diverse approaches such as Wi-Fi and location triangulation, are attempting to bridge the gap. Feedback on how well these achieve that purpose differs depending on who you ask, so I’ll be looking to explore this further in a blog entry later this year.

And if Millennials aren’t a good enough reason on their own to pay attention to your seamless digital experience, then consider Generation Z. Where Millennials embrace technology, Generation Z will be the coming of the true digital native, since all they have ever known is a world of mobile apps, touch-screen devices and a landscape carved from social media and a reduced concern about privacy. For them the boundary between the real world and the online world is missing – it’s simply one world. To them a seamless digital experience is not an added bonus. It’s expected. It’s a necessity. Will you be in a position to provide them with it?

Cyber security

There was plenty in 2015 to draw people’s attention to the topic of cybersecurity, and as potential threats continue to mount, it’s no surprise that cybersecurity is front of mind for many businesses as they look for ways to analyse potential weaknesses and protect themselves. SMBs in particular are finding themselves the target of attacks more and more as criminals look for those they expect to have less security in place.

For companies with fewer IT resources, the threat of cyber attack is likely to push them in two directions:

  • Firstly, the use of cloud based solutions that incorporate cybersecurity utilities and approaches that wouldn’t otherwise be available. Cloud providers are recognising this need and increasingly investing in technologies for data protection, network security, threat modelling and quicker incident response.
  • Secondly, collaboration: companies both small and large are recognising the benefit of sharing lessons learned and establishing best practices that will see all those participating better equipped for the threats that lie ahead.

Companies also need to provide their employees with awareness training. It’s important to remember that despite the increasingly sophisticated ways in which systems are being breached, it is often the older, tried and tested low tech approaches that are still the most effective. Phishing, for example, continues to be used, and the only way to combat that is by educating your workforce. Staff also need to know the procedures to follow in the event of a breach: what steps to take to get systems back up and running, and how to communicate the situation outwards to those it could affect.

Wearables, mobile and the Internet of Things are not going to help. Unsurprisingly, the more connected devices in use by businesses and consumers, the more vulnerabilities there are to exploit. With that in mind it’s likely we’ll see more major data breaches through the course of 2016.

And of course the increase in incidents will increase the focus on security and privacy regulation. Right now attention is on Safe Harbour 2.0, as EU officials meet with US counterparts in February to find a common position regarding which legal channels companies can use to transfer data across the Atlantic.

WWDC and Google I/O 2015

I normally pull together my list of highlights from Google’s and Apple’s major events a lot sooner than this, but this year neither had the immediate big pull or wow factor moments that the last couple of years did. Both events felt much more incremental in what they presented, with few surprises. Instead we saw expected advancements for services that were already in place, and OS evolutions of features that were already provided elsewhere, either by third parties or by rival platforms.

As such this year felt very much about updates, improvements and honing. While these, perhaps unfairly, don’t always have the same immediate impact as big centre ring reveals do, they do arguably impact on the lay of the land just as much, albeit over a longer, more drawn-out period of time. So, here is the list of announcements that I’ll be following the progress of through the rest of 2015 into 2016.

WWDC

Apple Pay
Obviously this is a big one even though it was known about already, and especially for me because they announced the UK date. It has been a bit of a patchy start for the UK, but the list of participating banks now includes (as of 7th August):

  • American Express
  • first direct
  • HSBC
  • mbna
  • Nationwide
  • NatWest
  • Royal Bank of Scotland
  • Santander
  • Ulster Bank

Bank of Scotland, Halifax, Lloyds Bank, M&S Bank and TSB are listed as coming soon. No sign of Barclays in the UK although they have been quoted as saying it will be supported ‘in future’.

Check out this link for further details on the Apple Pay UK launch. For help with setting Apple Pay up see here.

As convenient as Apple Pay can be, be aware of (not so) edge case failures, such as if your battery dies whilst you’re on the tube – you could find yourself being charged £8.80 for a single trip.

iPad split screen multi tasking in iOS 9
We’ve been expecting this to come for a while, given Microsoft’s support of it on their tablets and how well that works, so it was good to finally see this arrive at WWDC 2015.

Slide Over, Split View, and Picture in Picture are the three flavours of advanced multitasking that Apple will finally add to their mobile OS. These are supported by a new task switcher that you access by double tapping the home button.

Slide Over allows you to swipe in an app from the right, which will then take up a third of the view whilst pausing the main app you were using. So no, it isn’t true multitasking, but it is a quick and convenient way of accessing apps developed to support this functionality without moving completely away from the main app you’re in. If multiple apps are available that support this feature, you can change the app you’re pulling in by swiping down from the top of the screen to scroll through the selection. You will need at least an iPad Air or iPad mini 2 to use this feature and the new Picture in Picture functionality (which allows you to watch a video or take a FaceTime call whilst continuing to use another app).

If you have an iPad Air 2, then you can also use the new Split View feature, which does allow true multitasking. This is accessed by pulling a Slide Over view even further across the screen in order to take up half of it. Split View allows you to use each app independently of each other at the same time, dragging and dropping between them as required, and using a four finger swipe in order to change what app is in each half.

Apple’s new search features
Apple’s Spotlight and Siri search functionality will be benefitting from the introduction of ‘natural language’ technology, deeper linking into apps, and more immediate, push-based, context sensitive search results.

With natural language functionality you can speak a sentence in a much more comfortable way and Siri should be able to discern what you mean and come back with appropriate results.

With deep linking, developers will be able to index content within their apps and websites so that it is then eligible for inclusion in search results pulled together by iOS 9. For example this might include top stories from a website, or recipes from a cooking app etc. – whatever your content or service may provide, if you can index it, it can be included for consideration in searches.

The new Proactive Search functionality means your search view will be context aware, pre-populating itself with content based upon key factors – the entries in your contacts list, what apps you have installed (along with an awareness of when they are commonly used) and your current location (in conjunction with the Maps app). Used together, these will all contribute towards generating meaningful information and suggestions as you need them.

With regards to development check out Apple’s App Search Programming Guide. Also worth a read is this article, which covers difficulties found with setting up universal linking in the recent iOS Betas (universal linking allows either an installed app or website to be linked to as required).
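To illustrate the website side of universal linking: the site publishes a JSON file, served over HTTPS from the root of the domain, declaring which paths should open in the app. The team ID, bundle ID and paths below are placeholders, not values from any real app:

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "TEAMID.com.example.myapp",
        "paths": ["/recipes/*", "/articles/*"]
      }
    ]
  }
}
```

The file is named apple-app-site-association, and the app declares the matching domains in its entitlements; the guides linked above cover the app side in detail.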

App thinning
With the help of the App Store and OS, developers will now be able to optimise the installation of iOS and watchOS apps, with the deliverable tailored specifically to the capabilities of a user’s device, ensuring a minimal footprint in each case regardless of the spec of the phone, tablet or watch. It should ensure that apps use the most features possible on the device they’re installed on, whilst occupying the minimum disk space necessary. Faster, more tailored downloads and more space for apps on your device can only mean a better user experience.

See Apple’s app thinning guide for how slicing (creating and delivering variants of the app bundle for different target devices), bitcode (an intermediate representation of a compiled program that can be used to re-optimise your app binary in the future without the need to submit a new version of your app) and on-demand resources (resources such as images and sounds that the App Store can host and manage the timely download of for you) can be used to achieve faster downloads and initially smaller app sizes that will improve the download and first-time launch experiences of users.
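To make the slicing idea concrete, here’s a toy model (mine, not Apple’s tooling) of why delivering only the matching variant shrinks the download; the asset names and sizes are invented purely for illustration:

```java
import java.util.Map;

// Toy model of app slicing: a universal bundle carries every variant of a
// resource, but a sliced download contains only the variant matching the
// downloading device's traits (here, its screen scale).
class SlicingSketch {
    // Invented artwork sizes (KB) for each screen-scale variant.
    static final Map<String, Integer> ARTWORK_KB =
        Map.of("@1x", 100, "@2x", 400, "@3x", 900);

    // Universal download: every variant ships to every device.
    static int universalSizeKb() {
        return ARTWORK_KB.values().stream().mapToInt(Integer::intValue).sum();
    }

    // Sliced download: only the matching variant ships.
    static int slicedSizeKb(String deviceScale) {
        return ARTWORK_KB.get(deviceScale);
    }
}
```

Even in this tiny example a device that only needs the @2x artwork downloads a fraction of the universal bundle, which is the whole point of slicing.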

watchOS 2
Version two of Apple’s watchOS brings a number of updates and additions with it that make the Apple wearable proposition even more interesting.

As with Android Wear, the Apple Watch previously ran apps on the phone, with the device on your wrist in most cases simply acting as a display. Now we have the ability to run apps directly on the watch itself. It should mean less lag in some situations and the possibility of further innovative uses for the watch, but it remains to be seen how the battery will hold up under prolonged use of apps installed directly on the device rather than the phone – experience with the betas so far is mixed to say the least, so we shall see. Native apps will be able to access the watch’s microphone, speaker and accelerometer. They will also be able to utilise the digital crown control for custom UI elements – all things developers couldn’t do with the first version of watchOS (as third-party apps could only run on the phone and communicate with the watch, without the ability to utilise the watch’s hardware).

Developers will also be able to create custom Watch complications now. A complication is an element on the watch face designed to display a small chunk of useful (or perhaps not so useful) data. When designed well, a complication can do a great job of surfacing data and making it easy for a user to take in at a glance.

Apple are in fact opening up a lot to developers now with watchOS 2: video playback, contacts, the Taptic Engine, HomeKit, and more. It’s like finally being given the keys to the candy store. However, it remains to be seen if developers, given this new level of freedom, can create more meaningful, more responsive and more useful apps than some of those that we have seen up to now. I’m sure it’s only a matter of time (no pun intended).

Apple News
Apple News replaces Newsstand and will work in a similar way to Flipboard, allowing users to customise the way items are displayed. It will keep track of topics and news services that are of interest to you, ensuring relevant articles are collected and displayed. A number of known publishers have already signed up for launch.

Will Apple News kill off the likes of Flipboard and Pulse? On the surface it’s not particularly different from the rest. It has a head start though in terms of being built into the OS, so if it lives up to expectations will people go elsewhere for their news service when they can get it here? It is interesting to note that Apple are intending to curate their content, looking for editors who can help them identify and deliver the best content for readers. Flipboard too is highlighting its use of curation and managed content in the wake of Apple’s announcement.

Apple News is available across all devices, but like Flipboard, really looks best on the iPad.

Google I/O 2015

Android Pay
Google Wallet didn’t pick up steam as hoped, so will Android Pay get it right this time around? Available to phones running KitKat or higher, it will provide the user with the means to make purchases in apps and through the tapping of NFC sensors, and if your phone has a fingerprint sensor (see below) you can use that to authenticate the purchase. Google tell us that the service will work at 700,000 stores, however there is no news on when the service will be available in the UK. It’s not clear if it is the potential amendments to UK and European law that are holding up a UK launch or if it is something else, but with Apple Pay already over here paving the way, and with the huge potential for mobile payments starting to come into its own, Google will want to progress with this as quickly as they can.

App permissions
When you buy an app on Android, you’re greeted with a list of permissions that to many mean very little, but are in fact really important for understanding what you are allowing an app to do and have access to on your device. Android M (Google’s latest version of the Android operating system) brings with it a revised list of permissions, smaller and easier to understand than what is presented now.

Permission requests will be presented to the user on launch of the app or on the first instance that a permission is required, rather than on download of the app as it has been up until now. For some this will be quite a change, as the permission list can be the final decision maker in downloading an app from Google Play. Nevertheless, you’ll be able to revoke permissions on an app by app basis, so the control is still very firmly with you, and interestingly this is much more akin to how iOS works in terms of handling permissions.
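The shift is easy to express as a sketch. This toy model (plain Java, not the real Android permission API) captures the two behaviours described above: a prompt on first use rather than at install, and per-app revocation afterwards:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of Android M's runtime permissions -- not the Android API.
// Permissions start ungranted, are requested when first needed, and can
// be revoked by the user at any time afterwards.
class RuntimePermissionsSketch {
    private final Set<String> granted = new HashSet<>();

    // Called when a feature first needs the permission; in Android M this
    // is the point at which the system shows the user a prompt.
    boolean use(String permission, boolean userAccepts) {
        if (granted.contains(permission)) {
            return true; // already granted on a previous use
        }
        if (userAccepts) {
            granted.add(permission); // first-use prompt accepted
            return true;
        }
        return false; // user declined; the feature must degrade gracefully
    }

    // Settings-style revocation: control stays with the user.
    void revoke(String permission) {
        granted.remove(permission);
    }

    boolean isGranted(String permission) {
        return granted.contains(permission);
    }
}
```

The practical consequence for developers is the last comment: because a permission can now be declined or revoked at any point, apps have to handle the ungranted case gracefully rather than assuming everything was agreed at install time.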

Power management
USB-C support comes with Android M to provide faster charging times. Obviously you need a device with the new socket, and on the surface it may seem like a nuisance having a new connecting cable to deal with again, but USB-C brings with it a whole bunch of other supporting features that can be utilised if an OS supports them. See here.

Back to power management though; Android M will also come with a feature called Doze, which is designed to prevent apps from draining your battery when you’re not using your phone – this is done through detecting a lack of motion in the device and setting a number of policies as a result. In the past battery enhancements have had to rely, to varying degrees, on developers doing the necessary on their side. With Doze however, Google are much more in control of the actions being taken and the enforcing of them. During a Doze state your phone will:

  • Disable network access for everything other than high priority Google Cloud Messaging
  • Ignore Wake locks (indicators that an application needs to have the device stay on)
  • Disable Alarms scheduled with the AlarmManager class, except for alarms set with the setAlarmClock() method and AlarmManager.setAndAllowWhileIdle()
  • Prevent WiFi scans from being performed
  • Prevent syncs and jobs for your sync adapters and JobScheduler from running

When the device exits Doze state, it will execute any jobs and syncs that are pending, so it is still down to the developers how missed notifications will be handled once a device becomes active again. Google have provided a testing guide for developers here.
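The alarm rule in that list can be captured in a few lines. This is a toy restatement of the documented policy, not the real AlarmManager API:

```java
// Toy restatement of the Doze alarm policy -- not android.app.AlarmManager.
// While the device is dozing, only alarms set via setAlarmClock() or
// setAndAllowWhileIdle() fire; standard alarms are deferred until exit.
class DozeAlarmSketch {
    enum AlarmKind { STANDARD, ALARM_CLOCK, ALLOW_WHILE_IDLE }

    static boolean firesWhileDozing(AlarmKind kind) {
        return kind == AlarmKind.ALARM_CLOCK
            || kind == AlarmKind.ALLOW_WHILE_IDLE;
    }
}
```

In other words, anything time-critical has to opt in explicitly through one of those two methods; everything else waits for the device to wake.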

Offline modes for Google Maps, Chrome and YouTube
A full offline mode is coming to Maps, meaning you’ll be able to save maps and use them in the same manner as if you were connected – viewing places, looking at route details and stepping through turn by turn navigation.

With Chrome you now have the option to view a saved copy of a web page if you’re facing connection issues, i.e. if you’ve fully or partially viewed a page already you can still look at it if you subsequently lose connection.

YouTube will provide an offline feature as well, available on KitKat (Android 4.4) and above, that will allow you to save a video offline onto your phone for 48 hours – great for the underground commute or long car journeys with the family.

Google Now on Tap
Google continues to develop its ability to push useful, pertinent information to you, at any point, with the introduction of Google Now on Tap. Essentially it’s Google Now without having to leave an app.

Hold the on-screen home button and a view slides up containing knowledge graph information based upon where you are in the OS or what app you are in, along with a set of shortcuts to apps and content that should be useful. Combine this with Google Now’s ability to understand very non-specific voice-search questions from the context they are asked in, and this becomes a very helpful and easily accessible feature.

Take a look at the use case covered in this article to get a feel for how this might work. It will be interesting to see how this and Apple’s new search features will compare over time.

Android Wear
Apple Watch has stolen a lot of attention away from Google’s wearable OS recently. To many it appears to be taking the lead in terms of what wearables can do. Those better acquainted with Android Wear may feel this isn’t the complete truth – regardless, I feel Android Wear could have benefitted from a bolder showing than the one Google gave us this year, in order to reassert itself (especially given the head start it had in being available to the public). Here’s what Google showed in order to play catch up:

Always on apps – apps can continue to be active even in the low-energy display mode, ensuring that their information can be easily glanced at whilst on the go. Examples include checking the distance covered whilst on a run, looking at a shopping list and checking what else you still need to get, or following navigation directions using Google Maps. Have a look at the developer resources to see how to incorporate this into your Android Wear apps and to see how they will perform on older versions of the OS that do not support apps running in ambient mode.
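
As a rough illustration of what supporting ambient mode involves, here's a sketch of an always-on activity built on the Android Wear support library's WearableActivity class – the activity name and layout are made up for the example:

```java
import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;

// Sketch only: in ambient (low-power) mode the app should switch to a
// mostly-black, simplified UI and update far less frequently to save battery.
public class RunTrackerActivity extends WearableActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setAmbientEnabled(); // opt in to staying visible in ambient mode
        setContentView(R.layout.activity_run_tracker); // illustrative layout
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // Switch to a black background with minimal white-on-black text.
    }

    @Override
    public void onUpdateAmbient() {
        super.onUpdateAmbient();
        // Called periodically (roughly once a minute): refresh the distance shown.
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Restore the full-colour interactive UI.
    }
}
```

On older OS versions that don't support ambient mode, the activity simply stops being visible when the screen sleeps, which is why the developer resources are worth checking for fallback behaviour.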

WiFi and GPS – the latest version of Android Wear will allow your smartwatch (if it supports it) to take advantage of WiFi connectivity in order to deliver notifications and other actions to your wearable, even if you don’t have the phone that it is paired to with you. This means that if you were to leave your phone at home, on your desk, or in your car (heaven forbid), your Android Wear smartwatch will continue to bring you content as long as it has an accessible WiFi connection. Your phone, wherever it is, must have an active data connection as well, be it via WiFi or a mobile data network. This article goes into some further detail on how it all works together.

Wrist Gestures – you can now navigate through notifications and check them in more detail via a flick of your wrist, for example a flick outward for next and one towards you for previous. Work has been done to reduce the possibility of false positives, but you can turn this functionality off if you're the kind of person that talks with their hands and feels that this kind of thing is just asking for trouble.

Draw Emoji Characters – drawing an approximation of an emoji will see it converted to an actual image or response for the app you’re using, such as responding to a text. If your drawing skills are next to zero, there is a pull up list you can utilise instead.

Chrome Custom Tabs
At present, clicking on a web link from within an app means either loading the Chrome browser (a heavy context switch that is not at all customisable) or building your own custom browser solution on top of a basic in-app web view. Google announced Chrome Custom Tabs to provide a better, more practical alternative to these – allowing a developer to open a custom Chrome window on top of the active app instead of having to open the entire Chrome app separately. You can set the toolbar colour, entry and exit animations, and add custom actions to the toolbar or overflow menu. Additionally you can pre-start the Chrome tab in order to pre-fetch content, improving loading speed when the link is selected.
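
For a feel of the API, here's a minimal sketch of launching a Custom Tab, assuming the Custom Tabs support library is on the classpath – the helper class, colour and URL are illustrative:

```java
import android.app.Activity;
import android.graphics.Color;
import android.net.Uri;
import android.support.customtabs.CustomTabsIntent;

// Sketch only: opens a link in a lightweight Chrome tab layered over the app.
public class LinkOpener {

    static void open(Activity activity, String url) {
        CustomTabsIntent intent = new CustomTabsIntent.Builder()
                .setToolbarColor(Color.parseColor("#3F51B5")) // match app branding
                .setShowTitle(true)                           // show the page title in the toolbar
                .build();
        intent.launchUrl(activity, Uri.parse(url));
    }
}
```

The builder is also where entry/exit animations and custom toolbar actions would be configured.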

Fingerprint sensor support for future Android devices
Android M has built-in support for future Android devices that will come with a fingerprint sensor. Unlocking the device and authorising payments in the Play Store or through Android Pay will all be possible via a fingerprint scan. Developers will be able to use it within their own apps too.
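
As a sketch of how the new API might be used in an app (the class name is mine, and the key/cipher setup behind a real CryptoObject is omitted for brevity):

```java
import android.content.Context;
import android.hardware.fingerprint.FingerprintManager;
import android.os.CancellationSignal;

// Sketch only: listens for a fingerprint scan via Android M's FingerprintManager.
public class FingerprintHelper extends FingerprintManager.AuthenticationCallback {

    public void startListening(Context context) {
        FingerprintManager manager =
                (FingerprintManager) context.getSystemService(Context.FINGERPRINT_SERVICE);
        if (manager != null && manager.isHardwareDetected()
                && manager.hasEnrolledFingerprints()) {
            // Passing null for the CryptoObject here; real apps should back it
            // with a key in the Android Keystore so the result can be verified.
            manager.authenticate(null, new CancellationSignal(), 0, this, null);
        }
    }

    @Override
    public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) {
        // Unlock the feature or confirm the payment.
    }

    @Override
    public void onAuthenticationFailed() {
        // Fingerprint not recognised: prompt the user to try again.
    }
}
```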

Cloud Test Lab
Testing is difficult to conduct for Android apps, given the multitude of devices out there combined with the unfortunate variety of OS versions that they still run. To combat this Google announced Cloud Test Lab, their Android app testing service. When a developer submits an app through the developer console staging channel, Google will perform automated testing on what it considers to be the top 20 Android devices from around the world. Testing on these devices is free, though Google eventually plans to extend the service beyond the top 20 via a paid tier.

If the app crashes on any of the selected devices while the automated tests walk through it, a video of the app leading up to the crash, along with a crash log, is sent to the developer to help with debugging. The service should also help developers recognise layout issues on devices they may not normally have the means to test on – something that will benefit both Android developers and users in the long run.

This year it’s all about optimising and keeping up with the Joneses…
It’s evident that Apple and Google are doing a lot to improve their mobile OS offerings with this year’s iterations. Right now the focus on evolving the feature set and refining what is already there seems sensible, but next year we’ll all be expecting a lot bigger bangs from both.

Handling project risks

Risk. It might be a four-letter word, but it certainly isn't one that you should be avoiding in projects, and it shouldn't be brushed under the carpet. Risk is not an indication that a project has gone bad – when risk is being proactively identified and considered by the team as a whole, it allows the right actions to be taken by the right people to ensure the project's progression. Dealing with risks, issues and problems on a project is not an exercise in apportioning blame; it is an exercise in working together to secure the success of the project for everyone in the team. To that end, transparency and honesty should be the watchwords of all team members.

Not all risk can be removed from a project, but steps can be taken to recognise it and to make decisions on how it should be handled or not handled. Maximising opportunities that arise and minimising the impact of threats should be kept in mind at all stages, be it in the defining, developing or delivery of a product.

Here are a few things to keep in mind when managing risk on your projects. Remember, although the project/delivery manager may be the central co-ordinator responsible for communicating and managing risks and opportunities, the team as a whole should be considering the risks that arise as part of what they do, and be working together to report and act upon them – to this end the list below is useful to everyone on a project.

A lot of this may seem like common sense, but in the midst of a project things can get easily missed – it’s good to return to the basics and refresh your mind on the simple yet effective steps that you can take…

Risk Management is a part of ANY project you work on

It’s just common sense that evaluating and managing risk should be a part of any project, but it’s easy to let it slip, be it through complacency, ignorance, over confidence or even lack of time. It doesn’t matter how simple a project may seem or how routine it may be, potential issues should always be given due and relative consideration. I can’t think of a single project that you should be confident enough in that you can assume there are no risks to be at least considered, if not necessarily monitored or mitigated.

Also, as mentioned above, risk awareness should be considered a task for all team members – it shouldn't be assumed that the project manager is the only one that has to care. As a team you're reliant on each other to raise the risks that arise in your respective areas, and to provide the essential information required to make informed decisions on them. Processes and routines, such as daily stand ups, can help make risk and blocker reporting a habitual part of the team's day-to-day operation. Working as a team and bringing everyone together regardless of function helps remind everyone that you're all in the same boat, working towards the same goal (and will help to promote transparency, which is essential to good communication).

Identify risks as early as you can

The earlier you can identify risks the better. In some cases there will be no way to see a risk much before it arises, but regardless you should use all the sources available to you to identify issues wherever possible. Broadly speaking these will be either documents or data that exist in one form or another, or people – those in your project team, those you're collaborating with on the client side, or other key stakeholders. In some cases it could also be people not directly involved in your project who can nonetheless offer insight into what you are working on.

The point is that you need to keep an open mind towards sources of information, and recognise the breadth of inputs and knowledge you have at your disposal. Never be so arrogant as to think you can recognise all of the issues and options that may arise by yourself – a Japanese proverb I keep in mind is "Together we are smarter than any one of us". Working as a team increases the chances of recognising risks early, as well as the methods available for addressing them.

Stand ups, meetings, workshops, sprint planning, reviews, project plans, business cases, user stories, resource estimates, retrospectives of previous projects/sprints, company wikis, and so on. Using all that is available to you will increase your chances of identifying risks and opportunities sooner, and give you more time to address those that will blind side you no matter how much planning you’ve done.

Make sure risks are communicated

To quote Jerry Maguire: "Help me, help you". When you're managing a project, a lot of what you can do is dependent on what people tell you; on what people do to allow you to help them. There's nothing worse than a team or a client that doesn't communicate that essential piece of information that would have allowed you to lessen the effect of the one risk you didn't, or couldn't, see coming.

I think it’s really important to let team members, clients, stakeholders, etc. know that if they have anything to communicate about any issues or problems that have come to light, to do so as soon as they can. It can be strangely difficult sometimes to make people understand that there is nothing negative about raising risk – in fact it’s extremely positive. The moment a risk is a known entity is the moment something can be done about it. Communication is everything and you need to encourage and nurture it in every corner of the project.

Make reporting risks, blockers and concerns a key part of communications – make it understood that it is an important aspect of the project, and make it easy to do through the rituals and the tools you use. Where possible adopt a "go see for yourself" attitude and do what you can to increase your understanding of what is being faced. Provide those in your team with another avenue of risk reporting by making yourself accessible and approachable. If you are adopting an iterative approach and utilise daily stand ups (or whatever you call them), these will provide a natural point for your team to communicate the risks they have identified.

Additionally, consider how you will communicate risks from your side back to the client or sponsor – ensure what you communicate is succinct and understandable in order to ensure that the client can make as informed a decision as possible.

Ensure ownership of issues is clearly assigned and understood

As important as recognising a risk is determining who has responsibility for the actions that need to be taken on it. A risk that isn't 'owned' has no impetus for action. Cover next steps in your stand ups, set completion criteria, and monitor progress via the tools your team uses.

This is important not just within the team, but when working with a client too. Sometimes the risk sits squarely with the client, meaning that the responsibility to address it, along with the impact in terms of cost, quality or scope, lies squarely with them. Maybe they need to assign resource on their side in order for you to complete design, or maybe APIs need to be in place in order for sprints to progress. Clarifying the ownership of risks and clearly communicating their effects to the client or sponsor can really help to prompt action where it is needed. Those whose bottom line it affects will start to pay attention when it's made absolutely clear the ownership sits with them.

Prioritise a project’s risks

Not all risks are made equal. Risks can be assessed in terms of their level of impact and their likelihood of occurrence – the classic risk matrix plots these two dimensions against each other, and there are many versions of the diagram online that clearly lay out the concept.

Determining impact and likelihood will aid you in prioritising which risks need the most attention and when. Also consider how far away the risk is in relation to how much effort will be required to attend to it. Any showstoppers with the potential to halt a project completely should demand a greater priority, especially if the likelihood of them happening is high. In any case, measure consistently and prioritise appropriately.
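
As a simple illustration of the impact × likelihood idea, a scoring helper might look like the sketch below – the 1–5 scales, class name and bucket thresholds are arbitrary examples, not a formal standard:

```java
// Sketch of prioritising risks by multiplying impact and likelihood,
// both on an assumed 1 (low) to 5 (high) scale.
public class RiskMatrix {

    static int score(int impact, int likelihood) {
        return impact * likelihood; // raw score from 1 to 25
    }

    // Bucket the raw score into a simple traffic-light priority.
    static String priority(int impact, int likelihood) {
        int s = score(impact, likelihood);
        if (s >= 15) return "high";   // potential showstopper: act now
        if (s >= 6)  return "medium"; // monitor and plan mitigation
        return "low";                 // accept or review periodically
    }

    public static void main(String[] args) {
        System.out.println(priority(5, 4)); // severe and likely -> high
        System.out.println(priority(1, 2)); // minor and unlikely -> low
    }
}
```

Whatever scales you pick, the value is in applying them consistently so risks can be compared across the project.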

Analyse risks you’ve discovered

Wherever possible look for the root cause of a risk and truly understand why it is occurring. The more you understand the underlying reasons for a risk arising, the better your response will be, and the better prepared you will be for similar ones in the future. Using an approach like the 5 Whys (an iterative question-asking technique used to explore the cause-and-effect relationships underlying a particular problem) can be a simple but effective way of uncovering the real issues behind a risk and its resulting effect.

Use this in conjunction with the build, measure, learn principle advocated by the Lean Startup methodology and you’ll hopefully have an approach that will allow you to get to the root causes of risks and take those learnings forward to ease the avoidance or mitigation of similar ones in future projects.

Some risks are simply a part of the project by nature. In these cases, or in cases where only so much can be done to reduce their impact, you need to analyse what the cost to the project will be and what factors, if any, will affect the magnitude of the effect: changes in budget, quality, time and resource can all have measurable implications that need to be relayed back to the sponsor.

Plan and Implement your responses to risks

Once you’ve unearthed and analysed the risks, it’s time to plan the actions required to reduce their effect on the project. What can you do to either remove or minimise the risk or, failing that, accept it and communicate it to all concerned stakeholders in order to prepare them. Taking the previous steps to discover, understand and evaluate risks will hopefully allow you to formulate the best response possible in each case.

Can you re-organise a project to avoid a risk altogether, or at least minimise the impact it could have? For example, change a chosen framework or API that you're using, or change the point at which particular work is done within the project.

Sometimes accepting the risk and its impact on the project can be a valid option, if no other path of mitigation is available – especially if the effect on the project is minimal, or the effort, time or cost needed to influence it would in fact be more detrimental than the risk itself. In these cases communication of the risk is paramount, to ensure that this choice is a conscious one taken by those in the position to make it. Perhaps the effort, time and cost of getting a particular feature into the next build of your app is not worth the risk of impacting other features sitting in the same sprint. Is it more acceptable to acknowledge that a particular feature is unlikely to make the next build if it means no other items will be impacted and testing/QA can be completed in the usual manner? You may find in some cases it is.

Record risks and track their tasks

Whatever tools or processes you use, risks should be logged in order to capture ownership, progress and the decisions made by key stakeholders on them. In doing so you can chronicle, assess and communicate the progress made on them, as well as prevent any that were raised and discussed during meetings from being forgotten in the midst of everyday routine.

Logging risks is essential. Risks are a part of project life and are to be expected. Recording them allows you to retain the details of all essential tasks taken to handle them, and in the event of questions being raised further down the line, you have to hand a record of key points that you can track back through.

How you track risks compared to their associated tasks depends on the tools you use for your projects. In any case you want to track the status, importance/priority and likelihood of each risk, and, for those you've agreed to take action on, record the progress of the tasks, the changing circumstances over time and the effect of the tasks on the risk itself.

Accept risks are a part of the landscape, deal with them, and continue to improve your approach over time

No matter the style or methodology of project management that you use, risks will always be a part of what you need to deal with; and you do need to deal with them. They are not to be ignored and they are not to be handled in isolation. Honesty and transparency within the team is hugely important. Do not underestimate the power of a team that is communicative, collaborative and energised to work together to face issues. Going hand in hand with that you must ensure that you track the risks that are there. Recording risks and tracking tasks will facilitate the steps your team take to analyse the problems identified and action the right responses.

Finally, remember to hold your retrospectives – measure, learn and continue to refine the processes you use as a team. Create a lessons learned list/wiki and share it with everyone. Don’t let hard earned lessons slip away or be forgotten. There is almost always something that you can do better or more efficiently, so never stop that cycle of continuous reflection and improvement.

What does 2015 hold in store for mobile?

It’s been a good year for mobile and there’s no reason to imagine it being any different in 2015. Certainly from my perspective the use of mobile doesn’t seem to be abating in any visible way and there’s a lot around the corner to be excited about. Here’s my list of what I’m looking forward to in the New Year and beyond…

Consistency and continuity across devices

One of the headliners for WWDC, OS X Yosemite and iOS 8 this year was the idea of seamless continuity across your Apple devices – in essence the ability to connect and move from one device to another with unprecedented ease. Although the landscape is noticeably changing, right now we’re still working within ecosystems that necessitate multiple devices, each suited to specific contexts and situations, so the idea of being able to start work or watch a video on one and when needed be able to immediately pick it up and continue it on another without an ounce of faff, is very appealing.

Apple have approached this through the introduction of features such as Handoff, personal hotspot and SMS/Phone Relay, which focus on intuitive and uninterrupted interaction between devices without the need for numerous arbitrary actions or lengthy setup. Check out this link for an overview of Apple's approach to continuity and expect to see Handoff capabilities appear in more and more apps as developers get to grips with it.

To be fair, Handoff isn’t anything innovative in terms of the functionality being used. Google offers browsing and tab synchronisation through Chrome browser extensions and lets you access draft emails over various devices with its ability to do real-time saving in Gmail. What Apple does though is cleverly design the way in which the user can utilise these features, providing them in a way that is seamless and sensible, which straightaway makes it more useful.

As this approach improves over the coming year it will become more expected by users and seen less as added value – when consistency and continuity across devices becomes taken for granted that is the point when we’ll know it has truly been adopted and is successfully working as intended.

Smartwatches and smartwear

As mentioned in a previous post, smartwatches really speak to me in a nostalgic childhood wish fulfilment kind of way. Despite that, and having spent time with the Android Wear SDK and a couple of the devices themselves, I still haven’t bought one yet. Why? For me the devices still need to be that bit better… or a lot cheaper… or both! Better battery life, amongst other things, stopped me from being enticed by the Moto 360 (Update, 20/05/15: since the latest software update I’ve found the 360 battery life to be a lot better, and under ‘normal’ usage it lasted me a day and an evening, which for me is the least I’d consider acceptable right now). Hopefully 2015 should see smartwear improve enough for me to finally purchase something, whatever it might be.

Although as consumers we don't seem ready for head mounted wearables such as Google Glass, it's interesting to see that uptake of both Glass and similar wearable devices is increasing within industries where the ability to get and send information whilst keeping your hands free actually makes them desirable. This definitely looks to be a growing space through 2015. For the person on the street though, it seems focus will remain firmly on items that can be worn without drawing crazy attention to oneself – so keep your eye on the second round of Android Wear driven devices and of course the Apple Watch (which, although clearly first gen, can't help but add credibility to the idea of us incorporating smartwear into our lives).

What comes after wearables?  Well that would be embeddables, but hey let’s take it one step at a time shall we?  Check out this list of influences shaping the wearable landscape.

Mobile payments

Apple Pay, Google Wallet, PayPal in-store, CurrentC and many more – the options and buzz around mobile payments are increasing. Mobile payments are clearly starting to gain wider acceptance and adoption, and there is no reason to expect that this trend won't continue through 2015 as interactions between businesses, financial institutions and consumers increasingly occur via mobile devices – we are at a point of convergence between the new digital ways of completing transactions that retail and business now offer, and consumers' new-found willingness to experiment with them. Combine this with the use of iBeacons and similar technology to provide people with location-based purchasing opportunities, and you have a powerful vehicle for targeting the right people at the right time and the right location, whilst making sure they have the ability to purchase unhindered.

There is an undeniable cool factor around mobile payment technology right now, and the stats coming out suggest that whatever option is available on your device, you're likely to be interested in giving it a go at some point this year. Apple finally brought NFC to their devices with the iPhone 6 and 6 Plus (for payments only though, nothing else) and are pushing adoption of Apple Pay hard, which rather interestingly has helped increase awareness beyond just Apple users – for example it has had a positive knock-on effect on the uptake of Google Wallet, which has actually seen usage grow since the launch of Apple Pay.

Swift

It will be interesting to see how Apple's new Swift programming language continues to be adopted for iOS apps during the course of 2015. It's a great step forward over Objective-C that should make creating native apps for the platform smoother and less daunting, especially for the first-time developer.

Worth considering though is the increasing desire to create apps that are cross platform. Increasingly I'm talking to people that want to write once and distribute anywhere, and whilst historically cross platform frameworks have not delivered the goods, they are definitely coming of age. I explored going cross platform on mobile earlier this year, and the options available for cutting down the effort, time and cost whilst being able to target multiple markets. Knowing this trend is happening, you have to ask: will Swift have the impact it might have had a few years ago?

That said, there are still apps that gain an edge from being developed natively, games being the obvious example. In this sense Swift should do a great job, the question being how long it will take for developers to adopt it. Apple obviously aren't going to make developers change languages overnight – many have spent years learning the ins and outs of Objective-C and have made it their livelihood, so Apple aren't going to be shutting the door on it anytime soon. Shifting to Swift will be about the benefits it has over the older language, and many of those come from the fact that it isn't built on top of another language (unlike Objective-C, which was built on top of C) – because of this Swift should make creating apps for iOS easier, produce more efficient code and ultimately give developers a more productive experience.

I’ve not had the chance to spend much time with the language yet, but the syntax looks to be better and the developers I’ve spoken to so far confirm that it is a big improvement in many ways (let me know if you think otherwise) and you can indeed do things in a line or two of code that would take more to do in Objective-C and be a lot less concise.  Whichever way you cut it Swift looks like a good step forward for Apple from a development perspective, but the interesting thing for me right now will be to see how quickly it is (or isn’t) adopted for developing apps over the course of 2015.

Bigger screen phones and the potential of foldable screens

Do you remember the Dell Streak? The first of its kind in 2010, this 5 inch device was way ahead of the curve in terms of screen size. We now live in a time where you wouldn't bat an eyelid at a phone sporting a 5 inch screen – long story short, we now have greater acceptance of mobile devices with larger screens, which is hardly surprising given how much more online activity we're all willing to do through our smartphones, and how much a larger screen makes that experience more comfortable. Apple obviously feel the time is right too, with the introduction of the iPhone 6 (4.7 inches) and 6 Plus (5.5 inches) this year.

The flip side of the coin is convenience – if the screens get bigger, where will we put our phones? Will we have to carry them elsewhere, or will we see trouser pockets getting bigger? 🙂 That's why I'm interested in the research that's being done into foldable screens. Both Samsung and LG have started research into foldable, bendable, rollable screen technology and expect to have devices that utilise them during 2015 and beyond. Check out LG's roadmap:

[Image: LG flexible display roadmap]

For me this is a big deal. Big screens offer a great experience for many activities, but they can then hamstring the portability and convenience of the device.  Imagine a device that utilises a foldable screen with no hinges or breaks ruining the continuity of the screen.  Imagine a device that can fold down to the pocketable size of a smartphone when carrying it or using it for calls, but can then fold out to provide a larger screen for video and productivity based endeavours when necessary.  That’s the kind of device I’m looking forward to.

The Internet of Things

After much talk about the Internet of Things it’s finally starting to feel like 2015 will be the point when we actually start to see it touch upon our lives a lot more. Wearable tech and home automation seem to be the two ‘things’ with the most momentum and attention right now. Combine these with the growing integration of beacons and NFC into items and areas that will pick up on who we are and where we are, and you can imagine how the three will interact with each other to provide a rich, targeted experience.

There’s still a long way to go.  The speed at which we integrate into more and more items will be dependant on the continued improvement of low powered chips that can make it a reality, big data companies improving the ways in which all this new data can be handled and understood, and perhaps most importantly how we handle the serious question of security. How would you like your car being hacked or someone half way around the world being able to mess with your thermostat settings?

All that said, you won’t stop the Internet of Things from happening and 2015 will be the year where we start to see it entering the mainstream.  Apple has HomeKit, HealthKit, CarPlay and Apple TV.  Google has its Nest acquisition, (the multi OS) Google Fit, Android Wear, Android Auto and Android TV.  It’s clear that both of them are making sure they are set for integration beyond the world of phones.

Enterprise mobility

Mobile first isn’t just a trend in the consumer space, it’s clearly changing the landscape within enterprise too:  the need for mobile-centric CRM, the continued march of BYOD and CYOD along with the analysis and control of the data being accessed through these devices – these are all going to be key items of attention for businesses during 2015.  Check out this article for more detail on the above along with thoughts on Microsoft Office for iOS and Android becoming a mobile productivity standard and the increased adoption of enterprise app stores for managing and securing app usage.

Google’s Material Design

I expect we’ll see that Google’s Material Design will be implemented in more and more websites and apps during 2015, providing a simple, easy to apply framework that should in theory speed up design for developers and remove some of the common pitfalls. It will also be interesting to see if anyone else comes up with alternative frameworks for doing the same thing.

The card design standard

Mobile is clearly responsible for the popularity of cards as a design approach. Google Now, Apple's Glances, Twitter, Pinterest and Facebook's Open Graph are all examples of how the card approach can be utilised, and we're at a point where the concept really does make sense given the proliferation of small screen devices. Ranging from the ubiquitous smartphone to the newly emerging smartwatch and on towards the Internet of Things – designs based around cards will make sense during 2015 wherever you find a small screen interface staring up at you.

Project Ara

Will 2015 be the year that smartphones go modular? Are we at the point in mobile evolution where we’re jaded enough to be ready for something as different as Project Ara? Do you want to be able to build your own phone by swapping out components you don’t want and inserting better specced ones for the functions you do care about? Do you want a physical keyboard? Longer life battery and/or a faster CPU? Better speakers? In theory you should be able to take your pick. I’m not sure if we do want this or not, but I’m excited to check it out. Ara is expected to arrive at some point in 2015 and you can read about it in more detail here.

And there’s more than that too…

Of course there’s more than the above for us to look forward to in 2015, but these are the things in and around mobile that have really caught my attention.  What are you looking forward to?

This will most definitely be my final post of 2014, so with that in mind I wish you all a very happy and magical New Year.  Here’s to 2015.

“Learn from yesterday, live for today, hope for tomorrow.” — Albert Einstein

The key announcements from Google I/O 2014

Due to personal and work commitments I was unable to put in a post covering Google’s I/O announcements at the time, so even though I’m (incredibly) late to the party on this one, here’s my belated list of takeaways from Google’s 2014 event.

Android usage continues to climb

Senior Vice President at Google, Sundar Pichai, said Android now has more than 1 billion active users per month, up from the 538 million figure stated last year. Historically, Google measured Android usage in terms of total device activations, but it now uses a 30-day active statistic. I think it's good to consider the latest Android version stats alongside the overall active user number (all versions listed below support the Google Play Store app):

Data collected during a 7 day period ending on August 12, 2014

  • 2.2 (Froyo) 0.7%
  • 2.3.3 – 2.3.7 (Gingerbread) 13.6%
  • 4.0.3 – 4.0.4 (Ice Cream Sandwich) 10.6%
  • 4.1.x (Jelly Bean) 26.5%
  • 4.2.x (Jelly Bean) 19.8%
  • 4.3 (Jelly Bean) 7.9%
  • 4.4 (KitKat) 20.9%
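
Those percentages make the fragmentation story easy to quantify.  The sketch below (plain Java, with the figures copied straight from the list above) totals up the Jelly Bean variants and the share of devices not yet on KitKat – nothing official, just back-of-the-envelope arithmetic:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VersionShare {
    // Figures from Google's dashboard data above (7 days ending August 12, 2014).
    static final Map<String, Double> SHARES = new LinkedHashMap<>();
    static {
        SHARES.put("Froyo", 0.7);
        SHARES.put("Gingerbread", 13.6);
        SHARES.put("Ice Cream Sandwich", 10.6);
        SHARES.put("Jelly Bean 4.1", 26.5);
        SHARES.put("Jelly Bean 4.2", 19.8);
        SHARES.put("Jelly Bean 4.3", 7.9);
        SHARES.put("KitKat", 20.9);
    }

    // Combined share of the three Jelly Bean releases.
    static double jellyBeanTotal() {
        return SHARES.entrySet().stream()
                .filter(e -> e.getKey().startsWith("Jelly Bean"))
                .mapToDouble(Map.Entry::getValue)
                .sum();
    }

    // Share of devices not yet on the then-current release (KitKat).
    static double notOnLatest() {
        return SHARES.entrySet().stream()
                .filter(e -> !e.getKey().equals("KitKat"))
                .mapToDouble(Map.Entry::getValue)
                .sum();
    }

    public static void main(String[] args) {
        System.out.printf("Jelly Bean combined: %.1f%%%n", jellyBeanTotal());
        System.out.printf("Not on KitKat:       %.1f%%%n", notOnLatest());
    }
}
```

Over half of active devices were on one of the three Jelly Bean builds, and roughly four in five weren’t on the then-current release – which is the fragmentation problem in a nutshell.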

So, it’s great to see those Android user figures climbing and climbing, but we’re still stuck with that (OS version and device spec) fragmentation issue.  What’s Google trying to do about that?

Android One

The Android One program has been created to provide guidelines for a unified Android experience, specifically with smartphones aimed at developing markets in mind.  The objective for Google is to dictate the minimum hardware requirements that manufacturers must adhere to worldwide if they wish to attain Google’s official ‘Android One’ status.

This is no magic bullet for solving Android’s fragmentation problem, but it is at least a move in the direction of a more coherent Android experience across devices.  Applying methods such as setting a minimum specification should mean a certain level of performance can always be expected.

That said, Google has to keep Android current and attractive to consumers too – and in the smartphone space that means continuing to introduce new versions of Android.  Unless Google can force OEMs to use the latest version of Android in new devices (as they were widely rumoured to be considering in February this year) or at least make it easier for OEMs to upgrade the OS of phones (something Android One should help with), fragmentation is likely to remain a feature of the Android landscape.

It’s the flip side of the coin – you can’t have variety without the troubles that come with making Android work on every permutation of hardware and software that a manufacturer chooses to add to the mix.  This article may be two years old, but it gives you an idea of the numerous hurdles that stand in the way of resolving this issue.  It also asks the important question, “is it really a problem?”.

L Developer Preview

Android’s upcoming L release (the successor to Android KitKat) brings a number of cool items with it, along with the crystallisation of approaches that, again like Android One, focus on bringing a consolidated experience to all devices.

Google’s new approach to a unified cross-platform UI, called Material Design, is an aesthetic overhaul concentrating on making content look similar across the myriad screen sizes that now make up the growing Android ecosystem.

To help designers and developers produce simpler, cleaner and more colourful UIs, Material Design is accompanied by a set of tools for making customised typography, grid and colour changes, as well as an updated prototyping tool called Polymer for creating smooth animations running at 60 fps.  Check out Google’s guidelines for building a consistent look and feel with fewer distractions here.

From what I’ve seen of it in action, the focus on object depth/weight and animation does indeed bring the OS to life, but I’ve yet to have a hands on go with it.  It does seem to be going in the same direction as iOS 8 and Windows 8 – you’ll have to decide if that’s a good thing or not for yourself.

It’ll be a few months before you’ll see the final consumer version on devices, but you can check out the developer preview for yourself by following the steps outlined here – as with all betas though, bear in mind this is NOT a stable release, so don’t install it on your Nexus 5 if that happens to be your main device.  You could end up with apps, or worse a device, that doesn’t work.  You can uninstall it and revert to factory settings, so if you’re willing to take the risk – and have backed up your content in advance – at least you know you can reset things.

Other features being introduced with L:

  • Lockscreen notifications
  • Context-based authentication features (authenticating via a Bluetooth-connected Android Wear watch, for example)
  • 64-bit compatibility
  • A new Android Extension Pack to support 3D graphics
  • Project Volta – helps developers identify battery discharge patterns to improve overall power consumption and better manage battery life
  • More security, including patches for Google Play services and factory reset protection

From a development perspective there are also more than 5,000 new APIs for developers to sink their teeth into.  Also worth noting is that the L release leaves behind the Dalvik runtime and completely embraces the new and improved ART (Android Runtime).  While Dalvik compiled code at the point it was needed (otherwise known as “Just In Time” or “JIT” compilation), ART uses “Ahead of Time” or “AOT” compilation to process application code before it is needed.  In theory this should mean things run more smoothly for the user.  Those who are using it on their current 4.4 devices have reported increases in performance and battery life to varying degrees, but expect this to improve further over time.

If you’re interested in running ART on your 4.4 device follow the directions in this article, but be warned some apps may not be compatible with it.  Also, code compiled with ART will take longer to install and take up more space on your device.  As is the case with Android L it isn’t considered stable yet, so use it at your own risk.

Android Wear

Google used this year’s I/O to demonstrate the Android Wear SDK and some of the wearable functionality it provides.  I covered getting started with the SDK in my Android Wear entry back in May, but it was great to see this finally working on some actual devices.  Functionality on show included the touchscreen UI in action (clearly utilising Material Design) and use of the “Ok Google” prompt to create notes, reminders, alarms, calls and more.  Also, when a user installs an app from the Google Play store on their smartphone, a complementary wearable version of the app is installed on the smartwatch and synced.

The Google I/O keynote included a demonstration of the Eat24 app for Android Wear, which showcased how users could order pizza from their favourite takeaway restaurant within 20 seconds, via a couple of simple taps on the watch.  Mmmmm pizza…

Speaking of wearables, there was no Google Glass love this I/O (well, not on stage anyway – there was plenty in the audience), although we do know from previous announcements that Glass has been upgraded to 2GB RAM.

Android for Work

Consumer-wise, Android is getting bigger and bigger, but when it comes to business use Android still gives many companies shivers with respect to BYOD (Bring Your Own Device) policies and device management.  Google announced the Android for Work platform (derived from Samsung’s Knox), which will help enterprises deploy to devices in bulk and allow business and personal information to coexist securely on a single device by keeping the data for each separate, all without needing to modify or change existing apps.

Native Microsoft Office editing in Google Docs through integration with Google Drive will also be available, as will a premium level Google Drive plan for enterprise customers that provides unlimited storage for $10 per user per month.

All this comes as part of Android L, but will also be available for Ice Cream Sandwich, Jelly Bean and KitKat through installation of an app.  Look out for devices with Android for Work certification later this year.

Android Auto

Android Auto is Google’s completely voice-enabled solution for using connected devices in cars.  When you connect an Android smartphone to a compatible vehicle, Android Auto will cast its UI to the car’s screen (which will incorporate bigger buttons), offering shortcuts to location searches, suggestions, navigation/maps, your music library/music streaming and other information powered by Google Now and other apps.  You’ll also be able to send and receive text messages through voice command features, allowing you to keep your attention where it should be: on the road.

It may help you keep your eyes on the road and your hands on the wheel, but I’m still cautious about how safe this will be in practice.  It’s commonplace for our attention to be thrown by the various gadgets we use in various contexts; the car is one place where you really don’t want it divided in any way.  Time will tell though.  Android Auto supports steering wheel buttons, console dials and the aforementioned touchscreen, so it should be tightly coupled in use, which is encouraging.  Also the idea that the services offered will improve when my apps or phone do (rather than only when I trade in a car) is a pleasant thought, as is having Google Maps fully integrated into my car… maybe I’ll end up being a convert after all.

The Android Auto SDK is being supported by 40 partners, ranging from Hyundai to Porsche.

Android TV

“We’re simply giving TV the same level of attention as phones and tablets have traditionally enjoyed.  We want you to leverage your existing skills and investment in Android and extend them to TV.” said Google’s Dave Burke.

Google’s first stab at TV, Google TV, suffered from a lack of compelling content.  Android TV will let users watch live TV and streamed content in conjunction with integrated search capabilities that can provide things like cast bios/info along with related content on YouTube.  In addition, Android TV will also leverage what already exists on your Android device, allowing you to cast Play store games and other Android device content/apps to your bigger screen in much the same manner as you can with a Chromecast.  This makes Android TV a much better proposition than Google TV ever was.

Chromecast update

Speaking of Chromecast, you can now mirror your Android device to the television wholesale through this as well (yes, even the camera app can be mirrored, if you can think of a reason to), bringing it up to speed with what you can already do via a Chrome browser session or from an iOS device to an Apple TV.

Other updates included:

  • No longer needing to be on the same Wi-Fi network to cast content to a Chromecast (which was on my list of wants from the moment I bought one)
  • A Backdrop feature that allows background photos to be pulled from your own library of content, or if that’s not your thing you can display weather or news instead

Google Fit

Google Fit, an open multi-OS API, offers a platform similar to Apple’s HealthKit, aggregating all of a user’s fitness data in one place.  The likes of Nike and Adidas are already partnering.

You can get the developer preview here, which gives you access to the following:

  • Sensors API – view available sensor data sources from connected apps and devices
  • Recording API – connect your app and devices to Google Fit
  • History API – access and edit the user’s fitness history

Google Cloud

Google Cloud has been improved with the introduction of a number of new tools that help developers diagnose and improve their systems whilst they continue to run over numerous servers in production:

  • Cloud Save – the new version gives you a simple API for saving, retrieving, and synchronising user data to the cloud and across devices without needing to code up the backend (currently in private beta and will be available for general use soon)
  • Cloud Monitoring – find and fix unusual behaviour across your application stack
  • Cloud Trace – visualise and understand time spent by your application for request processing
  • Cloud Debugger – debug your applications in production with effectively no performance overhead

You can read more about it here on Google’s Developer blog.

Google also demoed Cloud Dataflow, which lets you create data pipelines for ingesting large data sets, and announced their acquisition of Appurify, a mobile test automation service.

This year’s Google I/O was very much Android I/O

Android was very much front and centre this year.  Google I/O covered in detail how Android is moving into more and more areas with Android Wear, Android TV and Android Auto, and it gave us a clear indication of how Google intends to ensure a unified approach to the Android UI, regardless of the context within which you are using it, be it from your car, through the TV or from the small screen strapped to your wrist.

Wherever you use Android, Material Design puts the emphasis squarely on great design, pleasing interfaces, and simple ways of accessing functionality.  This is a strong message to be sending out to both consumers and developers (whose jobs should be helped by the introduction of the new APIs and guidelines) and it underlines Google’s intentions for Android to be relevant in every corner of our lives.

WWDC 2014 Keynote showcases an enriched ecosystem and big changes for iOS developers

No new hardware announcements this time around, and no radical re-imagining of the iOS UI like last year, but still plenty of interesting reveals indicating Apple’s aims for the coming months, both from a consumer and a developer perspective.

The features showcased in the WWDC 2014 keynote should have a noticeable positive effect on the experience we get from our Apple devices.  Sure, we’ve seen some of this already, available in one way, shape or form via existing products or other OSes; here, however, they are better integrated into Apple’s ecosystem and put at your fingertips.  Apple’s design effort allows you to take these features for granted and get the most out of your tech.

 

Enriching the Apple ecosystem

Apple started off by showing its latest version of OS X, which clearly signalled its intention to bring desktop and mobile closer together through seamless integration and a shared style.  This year it was the turn of the Mac to experience a radical change in interface design, although thanks to iOS 7 last year it didn’t feel as jarring this time round, and the overall feel was still very much OS X.

Apple’s answer to Dropbox is the new online file storage service, iCloud Drive, which lets you store and access files from virtually anywhere – your iPhone, iPad, Mac or even a Windows-based PC.  Handoff takes this a step further, allowing you to work on files seamlessly across devices that are signed into the same iCloud account.  For example, start working on a doc on your iOS device whilst travelling, then automatically pick up from where you left off on your Mac once you reach home or the office.

Although devices talking to each other and sharing docs via the cloud is nothing new, it’s how Apple integrates it into their devices that is interesting – removing the steps you’d normally have to work through when accessing a file.  An Apple device can use Bluetooth to prompt another device within range to pick up the file and continue from where you left off on the original – all without the need to go through directories or links in order to do so, and without the need to use a third party app.  It may be a small thing, but it’s this and other features, such as the new ability to make and receive calls from your Mac when connected to your iPhone nearby, that increase usability and really show you where Apple is heading: continuity in an ecosystem seamlessly brought together.

 

The improvements coming with iOS 8

One of the coolest updates was to iOS 8’s push notifications, which now allow you to respond to text messages, invites etc. straight from the lock screen or from within the app that you are currently in.  This has been on my wish list for some time.

Apple catches up on the predictive typing front with the introduction of QuickType, which works pretty much as you’d expect, learning from your inputs; additionally it will anticipate what you might type based on the content you are responding to, learning from on-device data (as opposed to Android, which uses Google in the cloud).  You can also now install third party keyboards.

iOS 8 also brings the Messages app up to speed, with the introduction of a number of improvements:

  • You can now name threads and easily add and remove users.
  • Opt to leave a group conversation or use the ‘Do Not Disturb’ mode to stop notifications for various durations.
  • Share locations and send audio or video messages.  A received audio message can be played straight from the lock screen by raising the phone to your ear.  Audio and video messages will self-destruct after two minutes, Snapchat-style (not Mission Impossible-style, which would just be wrong).

In Mail you can now swipe down to access other emails without leaving the email you’re composing.  This is one update I will find hugely useful.

The Spotlight suggestions feature now integrates results from Wikipedia, the App Store, iTunes and more, so it is no longer restricted to just the content on your device.

Speaking of iTunes, Apple introduced Family Sharing, which allows content purchased by different family members (sharing the same payment card) to be accessible automatically from any of their devices.  Parents can be contacted when a child tries to make a purchase and choose to accept or reject it.  This is another one that will be put to good use in my household.

Finally the Photos app gets some new editing options, and Siri now has Shazam support (that integrates with iTunes) and can be voice activated “OK Google” style (but with the much more appropriate “Hey Siri” command).

 

What’s new for developers?

Heralded as the biggest release for developers since the introduction of the App Store, iOS 8 brings with it over 4,000 new APIs that will increase the functionality readily available to developers.

Apple finally revealed HealthKit during the middle of their keynote, which can collect health data from third party apps and keep track of daily activities.  All of this data is then available through the Health app.  There is also the ability to share this information with the Mayo Clinic (a well respected health organisation that has done a lot to push mobile technology integration in the medical sector) – the Health app alerting your doctor if your condition should necessitate it.

The HomeKit SDK allows third party devices certified as “Made for iPhone/iPad/iPod” to be controlled via HomeKit, rather than through separate applications.  These third party devices could be anything – locks, webcams, thermostats, garage doors, lights etc. – and can be grouped together into a set of actions, called “scenes”.  For example a “go to bed” scene could turn down the heating, lock doors and dim lights.  There are already some well known partners involved in this endeavour, such as Philips, Kwikset, Honeywell, iHome and Sylvania.  It will be interesting to see what kind of momentum and interest manifests around this once iOS 8 is released later this year.

The CloudKit developer framework will provide backend processing services for iOS 8, allowing developers to build cloud-connected apps.  Sounding similar to Amazon Web Services or Microsoft Azure, Apple’s Craig Federighi promised that it would be “free, with limits”.  It remains to be seen what this actually means in practice – what costs will there be, and is this specifically an Apple platform offering?

Apple claims it will bring “console level” graphics to iPhones and iPads with the introduction of a new 3D API, called Metal, that will allow game developers to take real advantage of Apple’s system-on-a-chip processing power.  Federighi boasted draw rates up to 10 times faster than those achieved with OpenGL.  Crytek, Epic, EA and Unity are apparently working on demos that demonstrate this.  The Zen Garden app, built with Unreal Engine 4 and demoed during the keynote, will be available free alongside the release of iOS 8, so we’ll all be able to check it out for ourselves.  Additional iOS 8 gaming frameworks, SceneKit and SpriteKit, were also announced, aimed at making development of 2D games (SpriteKit) and rendering of three-dimensional scenes (SceneKit) easier and more efficient.

After its introduction last year, the Touch ID API will now finally be available to developers for use in their own apps – a signal, I expect, that all iPads and iPhones going forward are likely to incorporate the fingerprint sensor.  There are a number of apps that spring to mind that I think would benefit greatly from this.

Extensions should prove useful to developers going forward too.  Much like Android’s intents, iOS apps can now offer services to each other.  Taking it a step further though, not only can you share to other installed apps, you can borrow useful actions and features from them as well.

We even have widgets on iOS now.  Not on the home screen as with Android, though – these widgets sit a step away, within the notification centre.

Worth mentioning are the iOS 8 changes to WebKit.  It has always been frustrating not getting the same level of performance from in-app webviews as you do from the Safari app in iOS.  As of iOS 8 though, all apps will be able to use the same JavaScript engine that is used in Safari.  Everything from Google’s Chrome app to the in-app browser pop-ups seen in Facebook etc. should now be as fast as Safari.  This is fantastic news, especially with hybrid apps becoming increasingly popular as a mobile solution.

The final reveal was a big one from a development perspective.  Apple presented its own programming language, called Swift, which will eventually replace Objective-C.  Apple says Swift is faster than Objective-C, but many will want to spend some time with it and see what others achieve before they believe that.

Swift will be able to work alongside Objective-C in the same app, happily working with Cocoa and Cocoa Touch, so there’s no need to sink wholesale into Swift right away.  With many developers out there experienced in Objective-C and many existing apps in the App Store written in it, I expect it will be a while before we see mass migration to Swift.  You can download Apple’s free introduction to the Swift programming language from the iBooks Store.  To use Apple’s own words:

Swift eliminates entire classes of unsafe code. Variables are always initialized before use, arrays and integers are checked for overflow, and memory is managed automatically. Syntax is tuned to make it easy to define your intent — for example, simple three-character keywords define a variable (var) or constant (let).
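
The overflow checking mentioned in that quote is worth a concrete illustration.  As a rough parallel (in Java, purely for illustration – this is not Apple’s code), default integer arithmetic silently wraps on overflow, while Math.addExact gives the fail-fast behaviour that Swift applies automatically:

```java
public class OverflowCheck {
    public static void main(String[] args) {
        // Default Java behaviour: overflow wraps around silently.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println("Unchecked: " + wrapped); // -2147483648

        // Opt-in checked arithmetic: throws instead of wrapping,
        // much as Swift does by default for all integer arithmetic.
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("Checked: overflow trapped");
        }
    }
}
```

Silent wraparound is exactly the class of bug Apple is claiming Swift eliminates by making the checked behaviour the default rather than an opt-in.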

Even though Swift uses the LLVM compiler to convert its code into optimised native code, you can use “Playgrounds” to type code in and see the results straight away:

Type a line of code and the result appears immediately. If your code runs over time, for instance through a loop, you can watch its progress in the timeline assistant… When you’ve perfected your code in the playground, simply move that code into your project.

You can find links to Swift resources here.

 

And that’s that for another year

So, plenty to get excited about.  Many will argue that a number of the reveals were things we’ve already seen in varying capacities elsewhere, but that doesn’t take away from the polish and level of integration with which Apple executes its own versions.  And in some cases Apple is no longer playing catch up, providing products and services that go one step beyond those offered by other operating systems and app providers.  On top of that, some announcements, such as Swift, are without doubt truly unique and important, and we’ll be feeling the consequences of those choices for some time to come.  By way of comparison, it will be very interesting to see what comes out of Google I/O later this month.  Stay tuned…

 

Taking a closer look at Android Wear and what it does for the smartwatch

I finally found some time this week to take a look at Google’s Android Wear; an extension of the Android OS designed specifically for wearable items.  Right now that obviously means smartwatches, but in the future it could be utilised by any number of items or garments.

Many developers and many mobile brands are already working on utilising it.  The LG G Watch and Moto 360 are the main devices spearheading the use of Android Wear, doing their utmost to make smartwatches meaningful and appealing.  When you look at these, especially the Moto 360, it feels like the design aesthetic has finally caught up with the engineering impetus.

I’m by no means saying that Android Wear answers the “why?” of owning a smartwatch.  It doesn’t suddenly make a smartwatch the be all and end all interface for your daily life – it does, however, finally give us a standardised way of developing for the smartwatch paradigm, and a design that is the nicest and most usable we have seen so far for the smartwatch context.

For those that want to dive deeper into how Android Wear will tick (pun intended) the preview SDK is now available for download.  It allows developers to start working with the Android Wear watch emulator in conjunction with an Android phone, to see what the experience will be like once the real devices are here.  After you sign up for the preview you can download and install the components and play with the included example apps.  It’s a great way to explore what will be involved in the development of apps that will support Android Wear devices.

There’s already a good guide to the preview that you can follow here, so I won’t repeat it in detail, but in short follow these steps to get started:

  1. Sign up for the preview SDK.  Once approved you’ll have access to the Android Wear Preview app that (once downloaded and installed) links a device to an Android Wear smartwatch, handling notifications and responses and allowing you to act upon them.
  2. Install the Android SDK (if not already installed) and then the Android Wear system image via it.
  3. Create an Android Virtual Device (AVD) with the Android Wear configuration (round or square) to emulate the smartwatch.
  4. Attach the device running the Android Wear Preview app (downloaded and installed in step 1) to your computer and then execute an adb command to set up communication between the device and emulator.
  5. Once the above is done, the emulator will start displaying any notifications your phone receives from apps and you can spend time seeing how you’d interact with them on a smartwatch.  If you want to see more you also get 3 example apps included in the preview SDK download: ElizaChat, RecipeAssistant, and WearableNotificationsSample.  They’re a good starting point for getting to grips with the code itself.

This link covers steps 3, 4 and 5 above in detail and is very useful.  If you’re going to play with the example apps pay particular attention to the section that covers adding the support libraries (android-support-v4.jar and wearable-preview-support.jar) to any projects you’re going to run (your own or the example ones) and make sure you don’t forget to do so.

I did hit one particular issue whilst importing and setting up the example apps via Eclipse, which wasn’t covered in the above steps or links.  I may have tripped over this because I’m not an everyday Android developer, but regardless I found the solution here on Stack Overflow.  In a nutshell, make sure you set the Java folder as the source by right clicking on the Java folder in the project and selecting Build Path -> Use as Source Folder.  Once I did this everything worked fine.  As long as you’ve done this the rest of the setup should be straightforward and you’ll be able to explore the example apps with ease.

The following links are also worth looking at and give further direction and detail for design practices and implementing notifications on Android Wear devices:

I’m really excited to see what people will do with Android Wear.  It may not be ground breaking, but there are interesting use cases coming out of early experimentation by developers – be sure to check out the Android Wear Developers Google+ community, which is very much alive and active right now with people getting to grips with the Android Wear SDK.

As anyone who has read my blog entry on APIs will know, I’m very interested in smartwatches.  Being of a certain age they give me a warm nostalgic feeling inside, but to date no device or approach has piqued my curiosity enough to make me want to spend my money on one.  With the advent of Android Wear though, I think that might be about to change.

Thinking about mobile UX and UI

Great design and a great experience are supremely important when it comes to mobile app development.  No matter how great the underlying functionality is, if it isn’t obvious to a person how the app works, how they get to what they want, then your app will die lonely and unused.  People expect the barrier to entry to be incredibly low; if they can’t open the app and know what to do within seconds of it starting, it will be closed down at best and at worst uninstalled.

 

Ask the important questions first

There are lots of questions you should be asking up front when you develop an app, and a number of these should be asked even before you start putting together your first wireframes and user flows.

You really want to take time up front to get into the user’s head and determine their objectives in using the app.  Understand the needs you’re trying to facilitate without specifying the features required to meet those needs – leave transposing goals into features until later on, as doing so early can stifle ideas and lead you down a particular functional solution too soon.

Think about what the goals are in terms of real people.  Using personas (a persona is a made-up person that represents the motivations, frustrations etc. of one of your user groups) can assist in crystallising your understanding of a target audience and will help in communicating it to all involved in the project.  Personas can also help you to focus on developing features that your customers will want rather than those YOU want.  Always keep front of mind that you’re asking the question “Should this product be built?” and not “Can this product be built?” (a key question when using the lean startup methodology).  Is this app/update/service really something that people want, and is the business case for it viable going forward?

 

You have the green light – let’s make a great mobile experience

Once these questions are answered you can move on to dealing specifically with the mobile experience and making it the best it can be.  This opens the door to a new batch of questions.  For example:

  • What is it you want to do/provide with your app?
  • What goal does the user want to achieve by using your app?
  • Is it obvious on first opening the app?
  • Is it designed in a way that puts the primary use front of shop?
  • Are you in new territory or are you uniquely trying to meet a need that similar apps have not met?
  • What will define a successful outcome for the user?

It’s important to keep in mind how different the experience is when using a mobile device compared to a PC.  Many of the principles that work on a PC just don’t translate to a mobile.  You’re dealing with different kinds of input and different real estate in terms of the screen size.

The different technology employed by mobile devices demands that you keep things simple and focused.  Keeping necessary user input as minimal as possible is key, as no one likes a mountain of taps or a ton of typing if it isn’t integral to the job in hand.  Keep in mind that the simpler your app is to use the easier it is to use on the move as well, which is a major context within which app usage occurs.

Beyond use on the go, you need to think about all the other ways in which mobiles are used, and the considerations that come attached to these:

  • It’s a very personal device, so your app will exist squarely within the user’s personal space.  This can have quite an effect on the relationship a person has with a product or service (when compared to interactions with it on a PC).
  • The ‘always on, always within reach’ nature of a phone also means there are new opportunities to interact with users that weren’t available on PCs – so again, the easier you make it to interact with your app, the better placed you are to catch those user impulses whenever and wherever they surface.
  • Also bear in mind that although the mobile exists within a very personal space, it is very often used within a very public one, so the user’s experience could be interrupted at any moment.  Design and develop your app to survive disruption mid flow: save progress for easy continuation at a later point, and provide a straightforward way to undo actions if that makes sense for your user flow too.
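The point above about surviving interruption mid flow boils down to: snapshot the flow state whenever it changes, and rehydrate it on relaunch.  A minimal sketch, with an in-memory `Map` standing in for real on-device persistence (localStorage on the web, SharedPreferences or NSUserDefaults natively) and hypothetical names throughout:

```javascript
// In-memory stand-in for real on-device storage.
const store = new Map();

function saveProgress(flowId, state) {
  store.set(flowId, JSON.stringify(state));
}

function resumeProgress(flowId) {
  const saved = store.get(flowId);
  return saved ? JSON.parse(saved) : null;
}

// The user gets halfway through checkout, then takes a phone call...
saveProgress('checkout', { step: 2, basket: ['sku-123'] });

// ...and on returning, the app restores exactly where they left off.
const resumed = resumeProgress('checkout');
console.log(resumed.step); // 2
```

The important design decision is saving on every state change rather than on exit – on mobile you often don’t get a clean exit event to hook into.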

Not only are there the differences between PC and mobile, but also differences between one mobile platform and another, and even between handsets using the same platform.  Look at your analytics and know the handsets that your customers are on – concentrate your efforts on these before expanding out.  Know the processing power each one has and know the OS revisions you’re aiming at.  You can then put your UX and UI effort into designing the app to be the very best it can be on each one.

 

Keep it simple, keep it consistent

In most cases you want to do as Apple say; keep it simple.  Keep the app focused on a core objective/function.  If there are a number of tasks that can be done via the app then at least make sure that the primary use is made the focus as part of your design approach.

Additionally keep the design and behaviour consistent.  Follow Apple’s HIG and Android’s design guidelines for the common established behaviours, so that a person can take for granted the basic interactions and thus feel at home straight away.  Where your app necessitates new and interesting ways of facilitating interaction make sure they are simple to understand and guidance is given (when absolutely necessary).

One of the most important things you can do is enable a person to build a clear mental model of how to navigate around your service.  By making your app consistent and predictable you allow a user to quickly construct their own mental model of the app’s functionality, which will continue to inform their use of it going forward.  Any deviation from the expected for a user can grate like hell.

 

I want you to tell all your friends about me…

Finally, encourage ownership in your app.  On top of a great user experience, if you can strengthen that sense of ownership in your app, that is an additional way to boost continual use and viral advertising.  Make sure you have those social hooks in there – allow people to share content with friends/colleagues, provide easy integration with other apps, and allow users to customise their experience where it makes sense.  Allow them to spread the word about how amazing your mobile app really is.

Going cross platform on mobile, part three: Cross platform development tools

There are many tool options available for facilitating cross platform development.  Which one you pick will depend on what skills and resources you have available to you and what kind of app you intend to develop.

Even if you’re just starting out in mobile development and are looking to adopt one of these tools from the outset, it is still worth considering taking the time to at least develop a simple app natively in both Android and iOS.  Why?  Firstly it will make it easier to understand what is going on under the abstraction layer that a cross platform mobile development framework provides.  Secondly it will also help you determine how much value a cross platform solution will offer you, in terms of development effort and the number of platforms you’re aiming to target.

App factory drag and drop “no coding required” solutions will provide efficiency through speed of upfront development, and may be the right decision for you if all you desire is a quick to market newsfeed app, a simple POC or something else of that ilk.  On the other hand cross platform IDE “code once, deploy many” approaches accommodate and require a more complex code-oriented development process, but are able to produce much more sophisticated products and will provide the return on investment once you begin to deploy over multiple platforms.  Sometimes, despite all the alternatives on offer, it may be that a native approach is still the better option after all.

In all cases you need to know your development and deployment strategy up front in order to choose accordingly.  Also remember to test your work on devices to get the truest reflection of performance, as it really is the only way you’re going to know for sure how your app will perform on the platforms you’ve targeted.

 

More choices than you can shake a smartphone at

As mobile app development matures, a mobile presence is becoming a necessity in more and more business sectors.  With that comes the increased need for cross platform solutions that meet a variety of needs, allowing a variety of apps to be developed in more cost efficient ways.

Truth be told, there are now a lot of tools available for cross platform development, and they cover the whole spectrum of needs: tools that assist in the development of native apps based on a single source of code, tools for developing apps that leverage well known web technology skills, and tools for non-developers that allow quick creation of web and native apps in very short timeframes (these are functional, but don’t expect to win any awards for your app if you go down this particular path).

The chart below from research2guidance’s report on cross platform tools gives you a good idea of what’s available, and additionally orders them into developer awareness based on the findings of their survey:

Chart-for-blog-post

*Note – due to extremely low awareness, more than 50 tools included in the survey weren’t included in the chart.

Plenty to choose from then.  You can see that Adobe Air, Phonegap (Cordova), Xamarin and Unity 3D are the most well known, although I would add Titanium, Sencha Touch and Marmalade as the other main ones out of the list that I’ve come into contact with.

The survey of developers also highlighted the following opinions and feedback on cross platform development solutions (if interested you can download the full report directly from the research2guidance site):

  • Only 11 cross platform tools (out of the 90 cross platform tools identified by the report) are known by more than 20% of the app community surveyed.
  • For the majority of developers already using cross platform tools, these environments have become their primary development platform.  63% of cross platform tool users develop more than 50% of their apps using a tool.
  • Saving time is one of the main benefits of cross platform tools.  Up to 75% of users have indicated that they can reduce app development time by more than 40%.  The time saved increased with the number of platforms being targeted for deployment (peaking at 5-6 platforms).
  • Overall, cross platform tools are rated well by the developers that use them.  A high rating was indicated for platform coverage (83%), availability of pre-installed apps (57%), API cloud service (52%), access to device hardware features (64%) and support (63%).  The overall cost-performance of cross platform tools is rated by 85% of the users as high or very high.
  • However it was clear that app performance is seen as the main weakness of cross platform tools.  50% of all developers rate the performance of the apps that are being developed by cross platform tools as considerably lower than that of their native counterparts.
  • Despite the high user satisfaction with these tools and the positive feedback indicated by the report, less than 5% of all apps available in today’s leading app stores were developed with the help of cross platform tools, suggesting the need for vendors to increase awareness amongst those that have not tried cross platform tools yet.
  • Corona and JQuery Mobile were rated to be the tools of lowest complexity, whilst Titanium and Marmalade were rated to be the highest in complexity.
  • In the benchmarking, users of Unity 3D and Xamarin could obtain the highest time-savings – in contrast, the lowest time savings were obtained by Marmalade and Titanium users.
  • The tools with the highest app quality were Xamarin, Unity 3D and Marmalade.

For me the above highlights the increasing desire for efficient cross platform development solutions.  It also indicates that, when deciding which tool is the right one for you, you need to be definite about what kind of app you’re developing and be clear on the development skills that you have to hand.  This will help you pick the right tool for the product you have in mind and avoid disappointment associated with unexpected tool complexity and below expectation performance.  Check out as many sources of information and opinion as you can in order to shortlist your options, and if you have the time available to trial run a selection then even better.

 

Information on three to start you off

Below is an overview of the three cross platform development frameworks we’ve looked at most recently.  These overviews include links to further reading and should give you a good starting place for your research:

Appcelerator Titanium

Appcelerator Titanium is an open, extensible development environment for creating native apps across different mobile devices and operating systems including iOS, Android, and BlackBerry, as well as hybrid and HTML5.  It includes an open source SDK with over 5,000 device and mobile operating system APIs, Studio – a powerful Eclipse-based IDE, Alloy – an MVC framework, and Cloud Services for a ready-to-use mobile backend.

There is a free Developer tier, and an Enterprise level (with Public Cloud and Virtual Private Cloud options) that offers more features – price on request (http://www.appcelerator.com/plans-pricing/).

Appcelerator Titanium allows you to build native mobile apps from a single code base using web development languages (HTML, JavaScript, CSS, Python and Ruby), removing the need to manage multiple developer toolkits, languages and methodologies.  App data can be stored in the cloud or on the device, and apps can take full advantage of hardware, e.g. camera and video capability.  Appcelerator apps go through a compilation and optimisation process that results in apps that should look, feel, and perform like native apps (as you are using the native UI components).  You use a cross platform mobile development custom API to build the app. This approach is different to PhoneGap or Xamarin:

  • With Xamarin you use a wrapper around the real native SDKs
  • With PhoneGap you use whatever you want to build an HTML5 web application.

With Titanium you write all of the code against their SDK – that includes UI components as well.  This means that when you write a Titanium application you can actually write a cross platform user interface as well, and the app is compiled down to a completely native application that uses the real native controls for the platform in question.

In reality though it’s not possible to build a completely cross platform app with 100% code reuse, because not everything has an equivalent across all platforms, e.g. iOS has the Navigation Controller that tracks what screens have been navigated through, but Android doesn’t.  So some of your code needs to be conditional – you’re going to need code in parts of your app tailored to the specific platforms you’re supporting in order to get the very best results.  However, even at 70-80% code reuse the benefits are still apparent to Marijn Deurloo, CEO of imagine:

“About 70-80% of our code can be reused across apps. That saves us a lot of time and a lot of skill.  It’s very hard to find different native skills and combine them in one team, but it is comparatively easy to educate people in JavaScript.”
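The conditional-code point described above has a simple shape.  In a real Titanium app the check would be against `Ti.Platform.osname`; in this sketch a plain `platform` string stands in so the pattern is visible on its own, and the returned objects are purely illustrative:

```javascript
// Platform-conditional navigation setup: shared logic, with a small
// branch where the platforms genuinely differ.
function createBackNavigation(platform) {
  if (platform === 'iphone' || platform === 'ipad') {
    // iOS has a Navigation Controller tracking the screen stack,
    // so the back affordance comes largely for free.
    return { type: 'navigationController', explicitBackButton: false };
  }
  // Android has no Navigation Controller equivalent, so manage the
  // screen stack and back behaviour yourself.
  return { type: 'activityStack', explicitBackButton: true };
}

console.log(createBackNavigation('iphone'));  // explicitBackButton: false
console.log(createBackNavigation('android')); // explicitBackButton: true
```

Keeping these branches small and isolated is what makes the 70-80% shared figure achievable – the conditional code should be the exception, not the rule.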

Titanium has an MVC framework called Alloy that simplifies creating Titanium applications and removes the need to programmatically create all of the user interfaces.  You declare your user interface using an XML markup, then use controller classes to populate and interact with the UI.  There is also a style sheet approach that is similar to CSS.  The build process will also let you build a web application out of the same codebase.

An API repository marketplace offers the ability to buy and sell code to extend Titanium and there is an active developer community Q&A site for Titanium developers (http://developer.appcelerator.com/questions/newest) similar in tone to StackOverflow.

Finally, Titanium’s cloud offering lets you have access to their backend cloud services that allow you to create Facebook-like functionality without having to code your own backend.  The cloud services can be used to manage, authenticate and store data about users (e.g. social graphs, key value pairs etc.).

Book-wise, check out Appcelerator Titanium: Patterns and Best Practices.

Work is currently being done on their next generation of Titanium called Ti.Next, which looks to improve performance and ease of development, as well as bring the IDE up to date.  Check out Jeff Haynie’s Thoughts on Ti.Next from July 2013.

PhoneGap (Cordova)

PhoneGap is a free and open source framework of JavaScript APIs that allows you to build web applications that are locally installed to the device with access to the native capabilities.

When you build an application using PhoneGap, you are essentially building a mobile web site for the device using HTML, CSS and JavaScript.  All layout rendering is done via web views instead of the platform’s native UI framework, but the resulting app is packaged for distribution like a native app and has access to the native device APIs.  From version 1.9 onward it became possible to freely mix native and hybrid code snippets.

The following chart shows the native libraries that are available to you through the PhoneGap JavaScript APIs:

pgfeat

http://phonegap.com/about/feature/

You develop a PhoneGap application, for the most part, like you would a cross platform mobile web site.  You can use any mobile framework you like, for example Sencha Touch, JQuery Mobile, etc. and have the potential to share just about all of your code across all target platforms since your application will be in HTML and JavaScript.  Bear in mind though that you will not be writing a native application in any sense of the word.  As your PhoneGap application runs in a web view it will be more like a web application than a native application.  The user interface you design will not use the native controls and will be subject to the limits and speed of the web view, which will not necessarily provide you with performance that matches the device’s web browser let alone native technology.

This means that you might have to write some platform specific code to make up for differences between the browsers, taking a hybrid approach to mix HTML with native elements to make the most of each (high interoperability and high functionality where it counts the most in the app).  You can basically assume though, that you will be able to share most of the code where beneficial.
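One common way to handle those per-platform differences is capability detection: probe for a feature before using it, and degrade gracefully when it’s missing.  A sketch in plain JavaScript, with a stub `device` object standing in for what the hybrid runtime exposes (the real PhoneGap camera API is `navigator.camera.getPicture`, with a richer callback signature than shown here):

```javascript
// Probe for the native camera capability; fall back gracefully if the
// webview doesn't expose it.
function capturePhoto(device, onPhoto) {
  if (device.camera && typeof device.camera.getPicture === 'function') {
    device.camera.getPicture(onPhoto);   // native camera is available
  } else {
    onPhoto(null);                       // degrade: e.g. offer a file picker
  }
}

// A device whose webview exposes a camera plugin...
const withCamera = { camera: { getPicture: cb => cb('photo.jpg') } };
// ...and one that doesn't.
const withoutCamera = {};

capturePhoto(withCamera, p => console.log(p));    // photo.jpg
capturePhoto(withoutCamera, p => console.log(p)); // null
```

The same probe-then-fallback shape applies to browser API differences as much as to native plugins.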

Whatever environment you want to build in, you should be able to find a plugin for your IDE of choice.  There is also a facility called PhoneGap Build that allows you to upload your project, whatever environment you created it in, and build it automatically for other platforms.

Books:

Xamarin Tools

Unlike Titanium and PhoneGap, Xamarin is not free to use.  Companies or incorporated entities with more than 5 employees must purchase a Business or Enterprise plan: Business at $999 per platform per developer, Enterprise at $1,899 per platform per developer.

Xamarin tools allow you to develop an Android or iOS application with C#, and share a good amount of the code by using an abstraction on top of the real SDKs for iOS and Android (Xamarin.iOS and Xamarin.Android).  To get the most benefit it is good to have some knowledge of the iOS and Android platforms’ APIs.  Strictly speaking it does not provide a cross platform UI library, since each supported platform has its own different set of C# UI libraries with their own capabilities.  However, approaching software development using the MVC or MVVM patterns will allow you to write Model and Controller components that can be shared across all the platforms you’re supporting without any changes, leaving only the View component to be re-written for each platform.  With this approach it’s potentially possible to achieve 80-90% code reuse.
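The Model/Controller sharing described above is easier to see in code.  Xamarin itself uses C#, but the split is language-agnostic, so here is the shape of it sketched in JavaScript for consistency with the rest of this series – the shared Model on one side, a per-platform View function on the other (all names hypothetical):

```javascript
// Shared across every platform: the Model and its logic.
const basketModel = {
  items: [],
  add(item) { this.items.push(item); },
  total() { return this.items.reduce((sum, i) => sum + i.price, 0); }
};

// Rewritten per platform: only the View layer touches native UI.
// The strings here stand in for real native widget construction.
function renderBasketIOS(model) {
  return `UITableView with ${model.items.length} row(s), total ${model.total()}`;
}
function renderBasketAndroid(model) {
  return `ListView with ${model.items.length} row(s), total ${model.total()}`;
}

basketModel.add({ name: 'Widget', price: 5 });
console.log(renderBasketIOS(basketModel));
console.log(renderBasketAndroid(basketModel));
```

Because only the View functions differ, everything above them can be written, tested and maintained once – which is where the 80-90% reuse figure comes from.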

Xamarin has its own cross platform IDE called Xamarin Studio.  A recently introduced component store makes it easy to find reusable components directly from Xamarin Studio and plug them into your application.  Alternatively there is a plugin for Visual Studio.  You can even develop an iOS application from Visual Studio, but you will still need a Mac to perform the build (the build is executed via a remote call to the Mac).  As Microsoft and Xamarin continue to strengthen their integration you can expect Visual Studio support to continue improving.

If your in-house team works predominantly with .NET/C#, then Xamarin offers you a means of leveraging that team’s existing knowledge (and codebase) of C# for development of iOS, Android and browser apps (and of course Windows Phone apps too).  Via the .NET CLR implementation, code can be compiled into modules of bytecode that are binary compatible across all of the supported platforms.

Book-wise you could consider Mobile Development with C#: Building Native iOS, Android, and Windows Phone Applications for a short intro to the topic, and Developing C# Apps for iPhone and iPad using MonoTouch: iOS Apps Development for .NET Developers, which is likely outdated in some aspects, but is still considered a good read by some I’ve spoken to.

 

The choice of approach is yours, but the goal is always the same

I hope you’ve found this three-parter useful.  The overarching message is straightforward, even if it is not absolute: there is no right or wrong approach to mobile app development, as long as you put the needs of your product first.

Whatever solution you use for delivering your mobile presence, whichever tech stack best suits your needs and product, keep these core goals in mind: always build with the user in mind, build for speed and build for ease of use.  Build something you’re proud of.

Right now this is even more important when designing for mobile platforms using web centric technologies.  Worry less about mimicking native behaviour and instead focus on producing great UX and UI that sits comfortably and functions efficiently within the processing power and screen real estate of mobile platforms.  Until web technology can replicate native look and feel wholesale, users will feel more comfortable with an experience that isn’t trying to deceive them – better to have an app centred on doing the job in hand efficiently and well than to expend resource on pretending it’s something it isn’t.

If you’re developing mobile apps, taking the time to try some of the available cross platform development tools is one of the best things you can do.  It will give you the first hand experience needed to determine clearly which, if any, fit the requirements of your upcoming projects.

Armed with knowledge of the market, knowledge of the types of app you can build and knowledge of the tools you can use to do so, you can make the right choices when it comes to building that next killer app.

Going cross platform on mobile, part two: Native, web and hybrid apps

So if we decide, having looked at the market and our product’s objectives, that we need to move beyond a single platform reach model, then we need to look at how we do this in the most efficient and effective way.

You can go native, using the primary programming language for each platform, but it’s going to need dedicated skills for each and of course the time to code for each mobile OS you want to have a presence on.  The alternative is to use well established and understood web technologies for product development, or to use a mixture of the two to create a hybrid approach with the aim of gaining the best of both approaches.  There are also frameworks that allow you to develop for cross platform delivery in one particular language and cross compile as necessary.  We mention these briefly below, but will cover the concept in more detail in part three along with frameworks that use web technologies as well.

In simple terms we can classify the three approaches to mobile app development as follows:

Native app

  • An app installed on a device (usually from an app store, but not necessarily) written in the ’native’ code of the platform/device.  It’s generally accepted that the UI of native apps does not consist of full-screen web page views alone – in this sense a web site ‘wrapped’ in native code in order for it to be distributable via Google Play, the App Store etc. does not constitute a native app.

Web app

  • Web apps are developed using standard web technologies to create a ‘web site’ that attempts to imitate the look and feel of a native app, or at least is designed to look and feel comfortable within the confines of a ‘mobile’ device i.e. it is not just a web page designed for desktop simply shrunk to show on a mobile phone.
  • When is a web app a web app and not a (mobile) web site?  It’s arguable, but for me in most cases it suffices to say that a web app follows the same model, and is used in the same way, as a native app.  The UI is structured along similar lines as an app and is focused upon a specific task.  That it might be consumed via the device’s browser rather than installed through an app store is not seen as making the defining difference (it might make a difference in terms of performance, but that’s not the user’s concern).
  • Also a web site may use responsive web design approaches and mobile best practices to improve the look and consumption of the content on a mobile device, but if it isn’t following an app style user model it can again be argued that this is a web site and not a web app.  As you can tell the difference can be a bit of a grey area.

Hybrid

  • Hybrid apps are an instance of a native app, utilising web and native tech/functionality to varying degrees (e.g to access the device’s camera or to provide the ability to receive push notifications).
  • Most people probably understand a hybrid app to be a similar proposition to a mobile web app, but packaged as a native app with access to native services (à la Phonegap/Cordova apps).  They either try to mimic native look and feel or at least are (hopefully) designed to look and function comfortably within the confines of the app use case.
  • However, hybrid apps sit on a spectrum; ranging from simply being mobile web products ‘wrapped’ in an app (offering little if any native functionality to differentiate them from mobile web) through to hybrid apps that are almost fully native.

Cross compiled apps

  • Cross compiled apps are another approach that needs to be mentioned.  Frameworks in this category allow you to write code in web technologies and/or a selection of programming languages, which can then be compiled down to native code for specific platforms.  In theory they should provide closer to native performance than web apps do, but feedback from developers is varied and likely dependent on what is required from the app.

The above are touch points on a spectrum of possibilities, where native functionality can be mixed with web centric technology to varying degrees, resulting in varying strengths and weaknesses.  The mix you choose for your product will determine the level of interoperability you have versus the level of functionality:

IntVsFunct

from: http://designmind.frogdesign.com/blog/unraveling-html5-vs-native.html

Or to show it another way:

Nathtmlhyb

from: http://wiki.developerforce.com/page/Native,_HTML5,_or_Hybrid:_Understanding_Your_Mobile_Application_Development_Options

There are many sites that list the pros and cons of these approaches in detail, but in overview you’ll find the following to be true:

  • Native will always be faster, especially for richer and more involved experiences – the drawback is that you have to redevelop your product for each platform, and you need access to suitably skilled developers for each one
  • It’s good to remember that expectations are different when people open an app compared to when they browse a site – native is on balance more capable of delivering the functionality that’s expected from an app, and so more focus will be needed to meet user expectations when developing apps via non native frameworks
  • Mobile web apps use known technologies, offer greater cross platform reach and promise ‘write once, deploy many’ capabilities – however the differences between devices, their browsers and OS versions mean you cannot guarantee how well an app will perform across all permutations.  Pure web apps also do not have access to native functionality, and we’re still waiting on standards to be set by the W3C.
  • Hybrid apps potentially offer the best of both worlds, utilising native code to leverage native performance and functionality, whilst creating other elements in web technologies to gain the benefit of known tech, quicker updating/maintenance (web elements can be updated within the app without the need to resubmit the entire app to the app store), and easier portability
  • As previously mentioned, hybrid apps exist on a spectrum going from highly web app oriented to heavily native, depending on the needs of the product – there are also a number of frameworks that allow web code to communicate with and use native functionality, but these perform to varying degrees and due diligence is needed to confirm if the framework delivers sufficiently for your particular needs.

There is no right or wrong approach, but right now you have more chance of seeing a badly performing web tech based app than a native one.  If you’re going to leverage web or go hybrid, make sure you do what you can to make your solution as performant as possible.  I read a number of studies during 2013 indicating that the majority of people were disappointed with their experience of browsing the web on mobile devices and would use their smartphones more if the browsing experience improved.  64% of smartphone users expected websites to load in 4 seconds (Compuware, April 2013).  Download speed is the biggest factor in achieving this – to hit the 4 second load time, based on general UK speeds right now, a website should aim to be a maximum of 1MB for 3G users and 3MB for 4G users.  However, due to network latency, smartphone memory, cache and CPU, in reality the download size needs to be smaller still to make up for all of these potential bottlenecks.
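To make the arithmetic behind those size budgets explicit, here’s a sketch.  The connection speeds are my own assumed representative figures (roughly 2 Mbit/s for UK 3G and 6 Mbit/s for 4G), and latency and device bottlenecks are ignored, which is why real budgets need to come in under these numbers:

```javascript
// Back-of-envelope page load time: size in megabytes, link speed in
// megabits per second.
function loadTimeSeconds(pageSizeMB, linkMbitPerSec) {
  const megabits = pageSizeMB * 8;   // 1 byte = 8 bits
  return megabits / linkMbitPerSec;
}

console.log(loadTimeSeconds(1, 2)); // 4 — a 1MB page on a ~2 Mbit/s 3G link
console.log(loadTimeSeconds(3, 6)); // 4 — a 3MB page on a ~6 Mbit/s 4G link
```

Both budgets land exactly on the 4 second expectation under these assumptions – which is precisely why latency and processing overheads push the practical targets lower.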

Below are some best practices for making web performant.  These especially apply to mobile:

  • Reduce dependencies/HTTP requests, image dimensions, and client side processing whenever you can
  • Use CSS3 instead of images where possible, sprite your images using CSS or transfer your images using a data URI scheme
  • Minify your code
  • Eliminate redirects
  • Load contents lazily and don’t load data that will never be seen or used by the user
  • Utilise a mobile content first policy for your web content or create pages specifically for mobile use alone
  • Plan for the lowest common denominator if you’re looking to reach many users – alternatively, focus on a specific set of devices that represents your audience and target their capabilities alone
  • JavaScript on devices with slow processors can be expensive to execute, so it is important to make sure your client-side code is lean, mean, and uses minimal memory too, as well as implementing network-based optimisations
  • Policies and standards for web technologies using advanced features like geo-location, camera integration etc. are still developing and the implementation of HTML5 is far from uniform – it varies from browser to browser and from mobile platform to mobile platform, so be cautious of any promises of “one size fits all” tools and perform some proof of performance tests up front:
  • The Nitro JS engine used in mobile Safari is not available to the UIWebView, which is used to show web content within apps on Apple devices, so performance will be worse – test how (the lack of) JIT compilation affects your app’s performance by building a test app, or alternatively by installing Google Chrome for iOS, as it uses the same UIWebView component and is similarly compromised when it comes to JIT compilation
  • Similarly the same browser engine isn’t used across all Android devices and so performance can differ in this way too
  • Write efficient JavaScript code that doesn’t block the UI thread and follow language optimisations and best practices for it
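On the last point above, the usual trick for keeping the UI thread responsive is to slice long-running work into small batches and yield between them.  A sketch, using a synchronous `schedule` stub so the snippet is self-contained – in a real app you would pass `fn => setTimeout(fn, 0)` (or use `requestAnimationFrame`) so the UI thread actually gets a turn between batches:

```javascript
// Process a large array in small batches, yielding between chunks so
// the UI thread isn't blocked for the whole run.
function processInChunks(items, chunkSize, work, schedule) {
  const results = [];
  function runBatch(start) {
    const end = Math.min(start + chunkSize, items.length);
    for (let i = start; i < end; i++) results.push(work(items[i]));
    if (end < items.length) schedule(() => runBatch(end)); // yield, continue later
  }
  runBatch(0);
  return results;
}

// Synchronous scheduler stub; note that with a real asynchronous
// scheduler the results arrive over time rather than on return.
const runNow = fn => fn();

const squares = processInChunks([1, 2, 3, 4, 5], 2, n => n * n, runNow);
console.log(squares); // → [1, 4, 9, 16, 25]
```

The chunk size is the tuning knob: small enough that each batch finishes well within a frame, large enough that scheduling overhead doesn’t dominate.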

Lots of scope for deep diving on this subject, but that would turn this blog entry into a book.  Instead the below links should provide some food for thought on the subject of mobile app/web development and act as a springboard to further investigation:

If you take away one thought from the above let it be this: there isn’t one ring to rule them all… not yet anyway.  Right now we still need to take each mobile project on its own merits and decide the best approach for it.  The choice you make will be dependent on the product you want to create, the skills you have to hand and the other business factors that influence your efforts on a daily basis.

That said, there is still a trend for developers to move towards HTML5 and other cross platform frameworks, and understandably so when we need to make our development efforts as efficient as possible – but it is by no means a clear cut decision yet, and the options available must be explored.  Take a look at this article from VentureBeat on what developers felt at the end of last year.

In part three we’ll wrap things up by taking a look at three available frameworks that offer an alternative to native development for going cross platform on mobile.