
WWDC and Google I/O 2015

I normally pull together my list of highlights from Google’s and Apple’s major events a lot sooner than this, but this year neither had the immediate big pull or wow factor moments that the last couple of years did. Both events felt much more incremental in what they presented, with few surprises. Instead we saw expected advancements for services that were already in place, and OS evolutions of features that were already provided elsewhere, either by third parties or by rival platforms.

As such, this year felt very much about updates, improvements and honing. While these, perhaps unfairly, don’t always have the same immediate impact as big centre-ring reveals do, they arguably shape the lay of the land just as much, albeit over a longer, more drawn-out period of time. So, here is the list of announcements whose progress I’ll be following through the rest of 2015 into 2016.

WWDC

Apple Pay
Obviously this is a big one even though it was known about already, and especially for me because they announced the UK date. It has been a bit of a patchy start for the UK, but the list of participating banks now includes (as of 7th August):

  • American Express
  • first direct
  • HSBC
  • MBNA
  • Nationwide
  • NatWest
  • Royal Bank of Scotland
  • Santander
  • Ulster Bank

Bank of Scotland, Halifax, Lloyds Bank, M&S Bank and TSB are listed as coming soon. No sign of Barclays in the UK although they have been quoted as saying it will be supported ‘in future’.

Check out this link for further details on the Apple Pay UK launch. For help with setting Apple Pay up see here.

As convenient as Apple Pay can be, be aware of (not so) edge case failures, such as if your battery dies whilst you’re on the tube – you could find yourself being charged £8.80 for a single trip.

iPad split-screen multitasking in iOS 9
We’ve been expecting this to come for a while, given Microsoft’s support of it on their tablets and how well that works, so it was good to finally see this arrive at WWDC 2015.

Slide Over, Split View and Picture in Picture are the three flavours of advanced multitasking that Apple will finally add to their mobile OS. These are supported by a new task switcher that you access by double tapping the home button.

Slide Over allows you to swipe in an app from the right, which then takes up a third of the view whilst pausing the main app you were using. So no, it isn’t true multitasking, but it is a quick and convenient way of accessing apps developed to support this functionality, without switching completely away from the app you are in. If multiple apps supporting the feature are available, you can change the app you’re pulling in by swiping down from the top of the screen to scroll through the selection. You will need at least an iPad Air or iPad mini 2 to use this feature and the new Picture in Picture functionality (which allows you to watch a video or take a FaceTime call whilst continuing to use another app).

If you have an iPad Air 2, then you can also use the new Split View feature, which does allow true multitasking. This is accessed by pulling a Slide Over view even further across the screen so that it takes up half of it. Split View allows you to use each app independently of the other at the same time, dragging and dropping between them as required, and using a four-finger swipe to change which app is in each half.

Apple’s new search features
Apple’s Spotlight and Siri search functionality will be benefitting from the introduction of ‘natural language’ technology, deeper linking into apps, and more immediate, push-based, context-sensitive search results.

With natural language functionality you can speak a sentence in a much more comfortable way and Siri should be able to discern what you mean and come back with appropriate results.

With deep linking, developers will be able to index content within their apps and websites so that it is then eligible for inclusion in search results pulled together by iOS 9. For example this might include top stories from a website, or recipes from a cooking app etc. – whatever your content or service may provide, if you can index it, it can be included for consideration in searches.
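As a sketch of what this looks like for developers, here is how a single item could be added to the on-device index using the new CoreSpotlight framework (the recipe title and identifiers below are purely illustrative):

```swift
import CoreSpotlight
import MobileCoreServices

// Describe the content so Spotlight can display it in results.
let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
attributes.title = "Lemon Drizzle Cake"
attributes.contentDescription = "A 45-minute baking recipe"

// Wrap it as a searchable item with app-defined identifiers.
let item = CSSearchableItem(
    uniqueIdentifier: "recipe-42",
    domainIdentifier: "recipes",
    attributeSet: attributes)

// Hand it to the system index; results can then deep link back
// into the app when tapped.
CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error = error {
        print("Indexing failed: \(error.localizedDescription)")
    }
}
```

Tapping the result launches the app, which receives the `uniqueIdentifier` and can navigate straight to the content.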

The new Proactive Search functionality means your search view will be context aware, pre-populating itself with content based upon key factors – the entries in your contacts list, what apps you have installed (along with an awareness of when they are commonly used) and your current location (in conjunction with the Maps app). Used together, these will all contribute towards generating meaningful information and suggestions as you need them.

With regards to development check out Apple’s App Search Programming Guide. Also worth a read is this article, which covers difficulties found with setting up universal linking in the recent iOS Betas (universal linking allows either an installed app or website to be linked to as required).

App thinning
With the help of the App Store and OS, developers will now be able to optimise the installation of iOS and watchOS apps, with the deliverable tailored specifically to the capabilities of a user’s device, ensuring a minimal footprint regardless of the spec of the phone, tablet or watch. Apps should be able to use the most features possible on each device they are installed on, whilst occupying the minimum disk space necessary. Faster, more tailored downloads and more space for apps on your device can only mean a better user experience.

See Apple’s app thinning guide for how the following can be used to achieve faster downloads and initially smaller app sizes, improving the download and first-time launch experience for users:

  • Slicing – creating and delivering variants of the app bundle for different target devices.
  • Bitcode – an intermediate representation of a compiled program that can be used to re-optimise your app binary in the future without the need to submit a new version of your app.
  • On-demand resources – resources such as images and sounds that the App Store can host and manage the timely download of for you.
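Of these, on-demand resources involve the most code at runtime. A minimal sketch, assuming a hypothetical resource tag named "level-2-textures" set up in Xcode:

```swift
import Foundation

// Ask the system to fetch the resources tagged "level-2-textures"
// (an illustrative tag name) before they are needed.
let request = NSBundleResourceRequest(tags: ["level-2-textures"])

request.beginAccessingResources { error in
    if let error = error {
        print("Download failed: \(error.localizedDescription)")
        return
    }
    // The tagged resources are now available through Bundle.main,
    // exactly as if they had shipped inside the app binary.
    // Call request.endAccessingResources() once finished so the
    // system is free to purge them when space is needed.
}
```

Keeping a strong reference to the request object while the resources are in use is what stops the OS from purging them mid-session.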

watchOS 2
Version two of Apple’s watch OS brings a number of updates and additions that make the Apple wearable proposition even more interesting.

As with Android Wear, the Apple Watch previously ran its apps on the phone, with the device on your wrist in most cases simply acting as a display. Now we have the ability to run apps directly on the watch itself. This should mean less lag in some situations and the possibility of further innovative uses for the watch, but it remains to be seen how the battery will hold up under prolonged use of apps installed directly on the device rather than on the phone – experience with the betas so far is mixed to say the least, so we shall see. Native apps will be able to access the watch’s microphone, speaker and accelerometer. They will also be able to utilise the Digital Crown for custom UI elements – all things developers couldn’t do with the first version of watchOS (where third-party apps could only run on the phone and communicate with the watch, without the ability to utilise the watch’s hardware).

Developers will also be able to create custom Watch complications now. A complication is a small element on the watch face designed to display a useful (or perhaps not so useful) chunk of data. When designed well, a complication does a great job of surfacing data and making it easy for a user to take in at a glance.
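Complications are built with the new ClockKit framework. A minimal sketch of a data source, using an illustrative step-count value (the other delegate methods a real complication needs are omitted for brevity):

```swift
import ClockKit

// A bare-bones complication data source that shows one piece of text
// in the small modular slot on the watch face.
class StepCountComplication: NSObject, CLKComplicationDataSource {

    func getCurrentTimelineEntry(
        for complication: CLKComplication,
        withHandler handler: @escaping (CLKComplicationTimelineEntry?) -> Void) {

        // Hypothetical value – a real app would read live data here.
        let template = CLKComplicationTemplateModularSmallSimpleText()
        template.textProvider = CLKSimpleTextProvider(text: "8,542")

        // Hand the system an entry pairing the template with a date.
        handler(CLKComplicationTimelineEntry(
            date: Date(),
            complicationTemplate: template))
    }
}
```

ClockKit also supports timeline entries for past and future dates, which is what powers the watch’s Time Travel feature.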

Apple are in fact opening up a lot to developers now with watchOS 2: video playback, contacts, the Taptic Engine, HomeKit, and more. It’s like finally being given the keys to the candy store. However, it remains to be seen if developers, given this new level of freedom, can create more meaningful, more responsive and more useful apps than some of those that we have seen up to now. I’m sure it’s only a matter of time (no pun intended).

Apple News
Apple News replaces Newsstand and will work in a similar way to Flipboard, allowing users to customise the way items are displayed. It will keep track of topics and news services that are of interest to you, ensuring relevant articles are collected and displayed. A number of known publishers have already signed up for launch.

Will Apple News kill off the likes of Flipboard and Pulse? On the surface it’s not particularly different from the rest. It has a head start though in terms of being built into the OS, so if it lives up to expectations will people go elsewhere for their news service when they can get it here? It is interesting to note that Apple are intending to curate their content, looking for editors who can help them identify and deliver the best content for readers. Flipboard, too, is highlighting their use of curation and managed content in the wake of Apple’s announcement.

Apple News is available across all devices, but like Flipboard, really looks best on the iPad.

Google I/O 2015

Android Pay
Google Wallet didn’t pick up steam as hoped, so will Android Pay get it right this time around? Available to phones running KitKat or higher, it will provide the user with the means to make purchases in apps and through the tapping of NFC sensors, and if your phone has a fingerprint sensor (see below) you can use that to authenticate the purchase. Google tell us that the service will work at 700,000 stores; however, there is no news on when the service will be available in the UK. It’s not clear if it is the potential amendments to UK and European law that are holding up a UK launch or if it is something else, but with Apple Pay already over here paving the way, and with the huge potential of mobile payments starting to come into its own, Google will want to progress with this as quickly as they can.

App permissions
When you install an app on Android, you’re greeted with a list of permissions that mean very little to many people, but are in fact really important for understanding what you are allowing an app to do and have access to on your device. Android M (Google’s latest version of the Android operating system) brings with it a revised list of permissions, smaller and easier to understand than the one presented now.

Permission requests will be presented to the user on launch of the app or on the first instance that a permission is required, rather than on download of the app as it has been up until now. For some this will be quite a change, as the permission list can be the final decision maker in downloading an app from Google Play. Nevertheless, you’ll be able to revoke permissions on an app by app basis, so the control is still very firmly with you, and interestingly this is much more akin to how iOS works in terms of handling permissions.

Power management
USB-C support comes with Android M to provide faster charging times. Obviously you need a device with the new socket, and on the surface it may seem like a nuisance having a new connecting cable to deal with again, but USB-C brings with it a whole bunch of other supporting features that can be utilised if an OS supports them. See here.

Back to power management though: Android M will also come with a feature called Doze, which is designed to prevent apps from draining your battery when you’re not using your phone – this is done by detecting a lack of motion in the device and applying a number of policies as a result. In the past, battery enhancements have had to rely, to varying degrees, on developers doing the necessary work on their side. With Doze, however, Google are much more in control of the actions being taken and the enforcement of them. During a Doze state your phone will:

  • Disable network access for everything other than high priority Google Cloud Messaging
  • Ignore Wake locks (indicators that an application needs to have the device stay on)
  • Disable Alarms scheduled with the AlarmManager class, except for alarms set with the setAlarmClock() method and AlarmManager.setAndAllowWhileIdle()
  • Prevent WiFi scans from being performed
  • Prevent syncs and jobs for your sync adapters and JobScheduler from running

When the device exits Doze state, it will execute any jobs and syncs that are pending, so it is still down to the developers how missed notifications will be handled once a device becomes active again. Google have provided a testing guide for developers here.

Offline modes for Google Maps, Chrome and YouTube
A full offline mode is coming to Maps, meaning you’ll be able to save maps and use them in the same manner as if you were connected – viewing places, looking at route details and stepping through turn by turn navigation.

With Chrome you now have the option to view a saved copy of a web page if you’re facing connection issues, i.e. if you’ve fully or partially viewed a page already you can still look at it if you subsequently lose connection.

YouTube will provide an offline feature as well, available on KitKat (Android 4.4) and above, that will allow you to save a video offline onto your phone for 48 hours – great for the underground commute or long car journeys with the family.

Google Now on Tap
Google continues to develop its ability to push useful, pertinent information to you, at any point, with the introduction of Google Now on Tap. Essentially it’s Google Now without having to leave an app.

Hold the on-screen home button and a view slides up containing Knowledge Graph information based upon where you are in the OS or which app you are in, along with a set of shortcuts to apps and content that should be useful. Combine this with Google Now’s ability to understand very non-specific voice questions based on the context they are asked within, and this can be a very helpful and easily accessible feature.

Take a look at the use case covered in this article to get a feel for how this might work. It will be interesting to see how this and Apple’s new search features will compare over time.

Android Wear
Apple Watch has stolen a lot of attention away from Google’s wearable OS recently. To many it appears to be taking the lead in terms of what wearables can do. Those better acquainted with Android Wear may feel this isn’t the complete truth – regardless, I feel Android Wear could have benefitted from a bolder showing than what Google gave us this year, in order to reassert itself (especially given the head start it had in being available to the public). Here’s what Google showed in order to play catch up:

Always on apps – apps can continue to be active even in the low-energy display mode, ensuring that their information can be easily glanced at whilst on the go. Examples include checking the distance covered whilst on a run, looking at a shopping list and checking what else you still need to get, or following navigation directions using Google Maps. Have a look at the developer resources to see how to incorporate this into your Android Wear apps and to see how they will perform on older versions of the OS that do not support apps running in ambient mode.

WiFi and GPS – the latest version of Android Wear will allow your smartwatch (if it supports it) to take advantage of WiFi connectivity in order to deliver notifications and other actions to your wearable, even if you don’t have the phone that it is paired to with you. This means that if you were to leave your phone at home, on your desk, or in your car (heaven forbid), your Android Wear smartwatch will continue to bring you content as long as it has an accessible WiFi connection. Your phone, wherever it is, must have an active data connection as well, be it via WiFi or a mobile data network. This article goes into some further detail on how it all works together.

Wrist Gestures – you can now navigate through notifications and check them in more detail via a flick of your wrist. For example a flick outward for next, and one towards you for previous. Work has been done to reduce the possibility of false positives, however you can turn this functionality off if you’re the kind of person that talks with their hands and you feel that this kind of thing is just asking for trouble.

Draw Emoji Characters – drawing an approximation of an emoji will see it converted to an actual image or response for the app you’re using, such as responding to a text. If your drawing skills are next to zero, there is a pull up list you can utilise instead.

Chrome Custom Tabs
At present, clicking on a web link from within an app means either loading the Chrome browser (a heavy context switch that is not at all customisable) or using a basic in-app web view to build your own custom browser solution. Google announced Chrome Custom Tabs to provide a better, more practical alternative – allowing a developer to open a custom Chrome window on top of the active app instead of having to open the entire Chrome app separately. You can set the toolbar colour, entry and exit animations, and add custom actions to the toolbar or overflow menu. Additionally, you can pre-start the Chrome tab in order to pre-fetch content, improving loading speed when the link is selected.

Fingerprint sensor support for future Android devices
Android M has built-in support for future Android devices that will come with a fingerprint sensor. Unlocking the device and authorising payments in the Play Store and through Android Pay will all be possible via a fingerprint scan. Developers will be able to use it within their own apps too.

Cloud Test Lab
Testing is a difficult thing to conduct for Android apps, given the multitude of devices out there combined with the unfortunate variety of OS versions that they still run. To combat this Google announced Cloud Test Lab, their Android app testing service. When a developer submits an app through the developer console staging channel, Google will perform automated testing on what it considers to be the top 20 Android devices from around the world. Testing of this top 20 is free, though Google eventually plans to extend the service beyond the top 20 devices via a service that developers can pay for.

If the app crashes on any of the selected devices whilst being walked through, a video of the app leading up to the crash, along with a crash log, is sent to the developer to help them with debugging. Obviously the service will also help developers recognise layout issues on devices they may not normally have the means to test on – something that will benefit both Android developers and users in the long run.

This year it’s all about optimising and keeping up with the Joneses…
It’s evident that Apple and Google are doing a lot to improve their mobile OS offerings with this year’s iterations. Right now the focus on evolving the feature set and refining what is already there seems sensible, but next year we’ll all be expecting much bigger bangs from both.

WWDC 2014 Keynote showcases an enriched ecosystem and big changes for iOS developers

No new hardware announcements this time around, and no radical re-imagining of the iOS UI like last year, but still plenty of interesting reveals indicating Apple’s aims for the coming months, both from a consumer and a developer perspective.

The features showcased in the WWDC 2014 keynote should have a noticeable positive effect on the experience we get from our Apple devices.  Sure, we’ve seen some of this already, available in one way, shape or form via existing products or other OS; however here we have them better integrated into Apple’s OS ecosystem.  They put them at your fingertips.  Apple’s design effort allows you to take these features for granted and get the most out of your tech.

 

Enriching the Apple ecosystem

Apple started off by showing its latest version of OS X, which clearly stated their intention to bring desktop and mobile closer together through seamless integration and a shared style.  This year it was the turn of the Mac to experience a radical change in interface design; although thanks to iOS 7 last year it didn’t feel as jarring this time round, and the overall feel was still very much OS X.

Apple’s answer to Dropbox is the new online file storage service, iCloud Drive, which lets you store files and access them from your iPhone, iPad, Mac or even your Windows-based PC.  Handoff takes this a step further, allowing you to work on files seamlessly across devices that are signed into the same iCloud account.  For example, start working on a doc on your iOS device whilst travelling, and then automatically pick up from where you left off on your Mac once you reach home or the office.

Although devices talking to each other and sharing docs via the cloud is nothing new, it’s how Apple integrates it into their devices that is interesting – removing the steps you’d normally have to work through when accessing a file.  An Apple device can use Bluetooth to prompt another device within range to pick up the file and continue from where you left off on the original – all without the need to go through directories or links, and without the need for a third-party app.  It may be a small thing, but it’s this and other features, such as the new ability to make and receive calls from your Mac when your iPhone is connected nearby, that increase usability and really show you where Apple is heading: continuity in an ecosystem seamlessly brought together.

 

The improvements coming with iOS 8

One of the coolest updates was to iOS 8’s push notifications, which now allow you to respond to text messages, invites etc. straight from the lock screen or from within the app that you are currently in.  This has been on my wish list for some time.

Apple catches up on the predictive typing technology front with the introduction of QuickType, which works pretty much as you’d expect, learning from your inputs; additionally, it will anticipate what you might type based on the content you are responding to, learning from on-device data (as opposed to Android, which uses Google in the cloud).  You can also now install third-party keyboards.

iOS 8 also brings the Messages app up to speed, with the introduction of a number of improvements:

  • You can now name threads and easily add and remove users.
  • Opt to leave a group conversation or use the ‘Do Not Disturb’ mode to stop notifications for various durations.
  • Share locations and send audio or video messages.  A received audio message can be listened to straight from the lock screen by raising the phone to your ear.  Audio and video messages will self-destruct after two minutes, Snapchat-style (not Mission Impossible-style, which would just be wrong).

In Mail you can now swipe down to access other emails without leaving the email you’re composing.  This is one update I will find hugely useful.

The Spotlight suggestions feature now integrates results from Wikipedia, the App Store, iTunes and more, so it is no longer restricted to just the content on your device.

Speaking of iTunes, Apple introduced Family Sharing, which allows content purchased by different family members (sharing the same payment card) to be accessible automatically from any of their devices.  Parents can be contacted when a child tries to make a purchase and choose to accept or reject it.  This is another one that will be put to good use in my household.

Finally the Photos app gets some new editing options, and Siri now has Shazam support (that integrates with iTunes) and can be voice activated “OK Google” style (but with the much more appropriate “Hey Siri” command).

 

What’s new for developers?

Heralded as the biggest release for developers since the introduction of the App Store, iOS 8 brings with it over 4,000 new APIs that will increase the functionality readily available to developers.

Apple finally revealed HealthKit during the middle of their keynote. It can collect health data from third-party apps and keep track of daily activities, with all of this data then available through the Health app.  There is also the ability to share this information with the Mayo Clinic (a well-respected health organisation that has done a lot to push mobile technology integration in the medical sector) – the Health app alerting your doctor should your condition necessitate it.
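As a sketch of what this looks like from the developer side (shown in later Swift syntax for readability; the step-count query is illustrative and its error handling simplified), reading HealthKit data boils down to requesting authorisation and running a query:

```swift
import HealthKit

// The central access point for health data.
let store = HKHealthStore()
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

// Ask the user's permission to read step counts (nothing is written).
store.requestAuthorization(toShare: nil, read: [stepType]) { granted, _ in
    guard granted else { return }

    // Sum every step-count sample the store holds; a real app would
    // add a date predicate to limit this to, say, today.
    let query = HKStatisticsQuery(
        quantityType: stepType,
        quantitySamplePredicate: nil,
        options: .cumulativeSum) { _, result, _ in
        let steps = result?.sumQuantity()?.doubleValue(for: .count()) ?? 0
        print("Total steps recorded: \(Int(steps))")
    }
    store.execute(query)
}
```

The same pattern – request authorisation per data type, then query – applies to heart rate, weight and the rest of HealthKit’s data types.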

The HomeKit SDK allows third-party devices certified as “Made for iPhone/iPad/iPod” to be controlled through HomeKit, rather than through separate applications.  These third-party devices could be anything: locks, webcams, thermostats, garage doors, lights etc., and can be grouped together into sets of actions called “scenes”.  For example, a “go to bed” scene could turn down the heating, lock doors and dim lights.  There are already some well-known partners involved in this endeavour, such as Philips, Kwikset, Honeywell, iHome and Sylvania.  It will be interesting to see what kind of momentum and interest manifests around this once iOS 8 is released later this year.

The CloudKit developer framework will provide backend processing services for iOS 8, allowing developers to build cloud-connected apps.  Sounding similar to Amazon Web Services or Microsoft Azure, Apple’s Craig Federighi promised that it would be “free, with limits”.  It remains to be seen what this actually means in practice – what costs will there be, and is this specifically an Apple-platform offering?

Apple claims it will bring “console level” graphics to iPhones and iPads with the introduction of a new 3D API, called Metal, that will allow game developers to take real advantage of Apple’s system-on-a-chip processing power.  With better performance than OpenGL, Federighi boasted draw rates of up to 10 times faster.  Crytek, Epic, EA and Unity are apparently working on demos that demonstrate this.  The Zen Garden app, built with Unreal Engine 4 and demoed during the keynote, will be available for free alongside the release of iOS 8, so we’ll all be able to check it out for ourselves.  Additional iOS 8 gaming frameworks, SceneKit and SpriteKit, were also announced, aimed at making development of 2D games (SpriteKit) and rendering of three-dimensional scenes (SceneKit) easier and more efficient.

After its introduction last year, the Touch ID API will now finally be available to developers for use in their own apps, and I expect that this signals that all iPads and iPhones going forward are likely to have the fingerprint sensor incorporated.  There are a number of apps that spring to mind that I think would benefit greatly from this.

Extensions should prove useful to developers going forward too.  Similar to Android’s use of intents, iOS apps can now offer services to each other.  Taking it a step further though, not only can you share to other installed apps, you can borrow useful actions and features from them as well.

We even have widgets on iOS now.  Not on the home screen as with Android though – these widgets sit a step away, within the Notification Centre.

Worth mentioning are the iOS 8 changes to WebKit.  It has always been frustrating not receiving the same level of performance from in-app web views as you do from the Safari app in iOS.  As of iOS 8 though, all apps will be able to use the same JavaScript engine that is used in Safari.  Everything from Google’s Chrome app to the in-app browser pop-ups seen in Facebook etc. should now be as fast as Safari.  This is fantastic news, especially with hybrid apps becoming increasingly popular as a mobile solution.
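The vehicle for this is the new WKWebView class, which replaces UIWebView and shares Safari’s JavaScript engine. A minimal sketch, using an example URL:

```swift
import UIKit
import WebKit

// A bare-bones browser screen backed by the new WKWebView.
class BrowserViewController: UIViewController {
    private var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // WKWebView runs out of process and uses the same Nitro
        // JavaScript engine as Safari, unlike the old UIWebView.
        webView = WKWebView(frame: view.bounds)
        webView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(webView)

        // Example URL only – load whatever the app needs.
        if let url = URL(string: "https://www.apple.com") {
            webView.load(URLRequest(url: url))
        }
    }
}
```

For hybrid apps the switch is largely a drop-in change, with the JavaScript speed-up coming for free.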

The final reveal was a big one from a development perspective.  Apple presented its own programming language, called Swift, which will eventually replace Objective-C.  Apple says Swift is faster than Objective-C, but many will want to spend some time with it and see what others achieve before they believe that.

Swift will be able to work alongside Objective-C in the same app, happily working with Cocoa and Cocoa Touch, so there’s no need to switch to Swift wholesale right away.  I expect that with many developers out there experienced in Objective-C, and many existing apps in the App Store written in it, it will be a while before we see mass migration to Swift.  You can download Apple’s free introduction to the Swift programming language from the iBooks Store.  To use Apple’s own words:

Swift eliminates entire classes of unsafe code. Variables are always initialized before use, arrays and integers are checked for overflow, and memory is managed automatically. Syntax is tuned to make it easy to define your intent — for example, simple three-character keywords define a variable (var) or constant (let).
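Those claims are easy to see in a few lines of Swift (a small illustrative snippet, not taken from Apple’s book):

```swift
let maximumAttempts = 3        // 'let' declares a constant
var attemptsSoFar = 0          // 'var' declares a variable
attemptsSoFar += 1             // variables can change; constants cannot

if attemptsSoFar < maximumAttempts {
    print("More attempts remain")
}

// Optionals make "no value" explicit rather than a silent nil:
var lastError: String? = nil
if let message = lastError {
    print("Last error: \(message)")
} else {
    print("No errors yet")
}

// Integer overflow traps at runtime instead of wrapping silently;
// the &+ operator opts in to wrapping when that is what you want:
let wrapped: UInt8 = UInt8.max &+ 1   // wraps round to 0, explicitly
print(wrapped)
```

A plain `UInt8.max + 1` would refuse to compile or trap at runtime, which is exactly the class of bug Swift is designed to eliminate.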

Even though Swift uses the LLVM compiler to convert its code into optimised native code, you can use “Playgrounds” to type code in and see the results straight away:

Type a line of code and the result appears immediately. If your code runs over time, for instance through a loop, you can watch its progress in the timeline assistant… When you’ve perfected your code in the playground, simply move that code into your project.

You can find links to Swift resources here.

 

And that’s that for another year

So, plenty to get excited about.  Many will argue that a number of the reveals were things we’ve already seen in varying capacities elsewhere, but that doesn’t take away from the polish and level of integration with which Apple executes its own versions.  And in some cases Apple is no longer playing catch up, providing products and services that go one step beyond those offered by other operating systems and app providers.  On top of that, some announcements, such as Swift, are without doubt truly unique and important, and we’ll be feeling the consequences of those choices for some time to come.  By way of a comparison, it will be very interesting to see what comes out of Google I/O later this month.  Stay tuned…