Apple, iOS, and the need for third-party innovation

One of the main uses of my iPad is creating music. In my experience, iOS has provided a convenient and relatively low-cost way to explore and experiment with music synthesis, sampling, looping, audio processing, programming, sound design, and the production and dissemination of my semi-amateur home-studio recordings. The numerous developers involved in creating music-related apps have produced some of the most innovative products available.

At times, these developers have pushed the envelope when it comes to app design, functionality and interoperability. Many of them are also involved in designing and producing hardware instruments and technology, and in writing software for laptop and desktop computers, yet they recognised that the iPad offered another way to interface with digital music tools. In some cases, iPad apps can connect to or interact with their hardware and software counterparts (e.g., touchAble).

Elsewhere, developer vision has pre-empted and even overtaken Apple’s own product design. A good example is IAA (Inter-App Audio), introduced by Apple in 2013. While some app developers were quick to adopt this feature in their own products, in the same year the team at Audiobus took the functionality to another level, with a fully integrated platform within iOS that allows multiple apps to be connected virtually. Eventually, in 2019, Apple countered by upgrading its own Audio Unit (AU) infrastructure, introducing another way to connect separate apps.

There remain some anomalies in Apple’s approach to competing music apps and their commercial models. Although Apple has enabled developers to offer in-app purchases and upgrades, it is noticeable that to this day, Bandcamp does not sell digital music via its mobile app (presumably due to Apple’s hefty sales commission on digital content), although Bandcamp customers can purchase physical goods via the app. Meanwhile, on the SoundCloud app, users can purchase in-app subscriptions offering ad-free streaming and offline content, but Spotify customers cannot purchase similar premium streaming services within the corresponding app.

The latest move from Apple has got some developers quite excited. As well as bringing its professional video editing suite, Final Cut Pro, to iPad, Apple has launched an iPad version of Logic Pro, its professional music DAW (Digital Audio Workstation). Now, I don’t have a problem with this, and I can see the attraction for both app developers and Logic Pro users.

I myself use Ableton Live (rather than Logic Pro or Apple’s consumer-level product, GarageBand), so I am not planning to add another desktop DAW. Besides, Ableton enables third-party developers to integrate their AU and VST plug-ins on Mac. In addition, Ableton has launched a mobile app, Ableton Note, that can interact with the desktop program, which only confirms the co-existence of these platforms and users’ preference for interoperability.

My concern is that with the introduction of Logic Pro on iOS, Apple may close off some inter-app functionality to third-party apps if they do not support integration with Logic Pro. We’ve seen the way Apple can shut down external innovation: without getting too technical, until 2021, and with a little effort, users could run iOS music apps on their Macs, and within DAWs such as Ableton. Apple then closed off that option, although more recently it has enabled iOS-derived AUv3 plug-ins to run on M1-equipped Macs.

Hopefully, Apple recognises that an open ecosystem encourages innovation and keeps people interested in its own products, as well as in those from third-party developers.

Next week: Crown Court TV

App Overload

Following a recent upgrade to Apple’s iOS software, I found myself forced into some serious housekeeping on my iPad. I hadn’t realised how many dormant apps I had accumulated over the years, so I took the opportunity to do some culling.

First, there were apps that could no longer be accessed from the App Store. These are programs that have been removed by their developers, or are no longer available from the Australian App Store (yes, even in this digital day and age, geo-blocking still exists). I estimate that these accounted for about 20-30% of the total apps I have ever downloaded.

Second, apps that are not supported by the current version of iOS, because they have not yet been updated by their developers. (Luckily, I keep an older version of iOS on a separate iPad, which allows me to retrieve some of these apps via a bit of digital archaeology.) These represented another 15-25% of my apps (a variable number, given that some of them may yet be updated).

Third, apps that I seldom or never use. Thankfully, the iPad Storage settings provide the “Last Used” date, but they don’t enable users to rank apps by when they were last opened (or by frequency of use; the “Search” function within Storage only lists apps alphabetically). Perhaps Apple could refine the Storage settings to help users better manage overlooked or under-used apps? Anyway, these forgotten or neglected apps accounted for another 25-30%.

In total, I estimate that up to 75% of my iPad apps were redundant, through disuse, obsolescence or inaccessibility. Research suggests that around 25% of the apps we download are used only once, so unless these are free products, it feels like a large chunk of the US$900+ bn in app purchases could be going to waste…
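For what it’s worth, here is a rough back-of-the-envelope sketch (in Python) of the scale involved. The percentage ranges are my own approximations from the categories above, and the US$900 bn figure is taken at face value; none of this is measured data.

```python
# Illustrative calculation only: the ranges below are my own rough guesses
# for the three categories of redundant apps described above.

unavailable = (0.20, 0.30)   # removed from the App Store or geo-blocked
unsupported = (0.15, 0.25)   # not yet updated for the current version of iOS
unused      = (0.25, 0.30)   # seldom or never opened

low  = sum(r[0] for r in (unavailable, unsupported, unused))
high = sum(r[1] for r in (unavailable, unsupported, unused))
print(f"Redundant apps: roughly {low:.0%} to {high:.0%} of all downloads")

# If ~25% of downloaded apps are used only once, and annual app spending is
# around US$900bn, the potentially 'wasted' spend is of this order:
app_spend_bn = 900
print(f"Potential waste: up to ~US${0.25 * app_spend_bn:.0f}bn per year")
```

Running this gives a redundancy range of roughly 60% to 85%, and a notional “waste” figure in the order of US$225 bn, assuming (generously) that all of those once-used apps were paid for.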

Next week: Apple, iOS, and the need for third-party innovation

Customer Experience vs Process Design

Why is customer experience so poor when it comes to process design? Regardless of the product or service, it can be so frustrating to deal with onboarding, product upgrades, billing, payment, account updates and customer service. Banks, telcos, utilities and government services are particularly bad, but I am seeing more and more examples in online marketplaces and payment solutions.

Often, it feels like the process design is built entirely according to the providers’ internal operating structures, and not around the customer. The classic example is when customers have to talk to separate sales, product, technical support and finance teams – and none of them talk to each other, and none of them know the full customer or product journey end to end.

Even when you do manage to talk to a human being on the phone, rather than a chatbot, as a customer you have to repeat yourself at every stage in the conversation, and you can end up having to train front-line staff on how their products actually work, or on what the process should be to upgrade a service, pay a bill or troubleshoot a technical problem.

You get the impression that many customer-facing team members never use their own services, haven’t been given sufficient training or information to handle customer enquiries, and don’t have adequate authority to resolve customer problems.

On many occasions, I get the customer experience equivalent of “computer says ‘no’…”, when it appears impossible to navigate a particular problem. The usual refrain is that the “system” means things can only be done a certain way, regardless of the inconvenience to the customer, or the lack of thought that has gone into the “process”.

As I always remind these companies, a “process” is only as good as the people who design, build and operate it – and in blaming the “system” for a particular failing or inadequacy they are in effect criticising their own organisations and their own colleagues.

Next week: App Overload

AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be released under various forms of Creative Commons licence, but there still needs to be a human author behind the copyright material. And while copyright may lapse over time, the work then simply becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are presumably either in the public domain, or fall under some category of creative commons. This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, particularly where its grammar and spelling tools have been used.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain relevant permission from the copyright holders (as would anyone who uses copyright content as prompts for AI tools), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke…

Next week: Customer Experience vs Process Design