An open letter to American Express

Dear American Express,

I have been a loyal customer of yours for around 20 years. (Likewise my significant other.)

I typically pay my monthly statements on time and in full.

I’ve opted for paperless statements.

I pay my annual membership fee.

I even accept that seven or eight times out of ten I get charged a merchant fee for paying by Amex – and in most cases the fee is much higher than for other credit or debit cards.

So, I am very surprised I have not been invited to attend your pop-up Open Air Cinema in Melbourne’s Yarra Park – especially as I live within walking distance.

It’s not like you don’t try to market other offers to me – mostly invitations to increase my credit limit, transfer outstanding balances from other credit cards, or “enjoy” lower interest rates on one-off purchases.

The lack of any offer in relation to the Open Air Cinema just confirms my suspicions that like most financial institutions, you do not really know your customers.

My point is that you must have a wealth of data on my spending patterns and preferences, from which you should be able to glean my interests such as film, the arts, and entertainment.

A perfect candidate for a pop-up cinema!

Next week: Life After the Royal Commission – Be Careful What You Wish For….


The General Taxonomy for Cryptographic Assets

It’s not often I get to shamelessly plug a project I have been involved with – so please indulge me in the case of Brave New Coin’s recent publication, “The General Taxonomy for Cryptographic Assets”. It’s a significant piece of work, designed to bring some structure to the classification of this new asset class.

In particular, it aims to help market participants (traders, brokers, investors, fund managers, asset managers, portfolio managers, regulators etc.) make sense of the growing list of digital currencies, as not all tokens are the same. Each one has a specific use case that needs to be understood in the context of Blockchain applications, whether decentralized protocols or trustless payment solutions.

The underlying database currently captures around 60 data points and metrics on around 700 tokens. In the coming months it will double in size, and it will be constantly maintained thereafter to keep current with the most significant assets.

Useful for portfolio screening, construction and diversification, the Taxonomy methodology and underlying database, when combined with Brave New Coin’s aggregated market data and indices, will provide a 360-degree view of each asset, combining key elements of a CUSIP or ISIN record, a company directory profile and a regulatory filing.

The significance of having access to robust market data and reference data tools cannot be overstated, given the price volatility and emerging nature of this new asset class. The Taxonomy will be presented at various Blockchain and Crypto events over the coming weeks, but for further information, the authors can be contacted at:

Next week: APAC Blockchain Conference

Big Data – Panacea or Pandemic?

You’ve probably heard that “data is the new oil” (but you just need to know where to drill?). Or alternatively, that the growing lakes of “Big Data” hold all the answers, but they don’t necessarily tell us which questions to ask. It feels like Big Data is the cure for everything, yet far from solving our problems, it is simply adding to our confusion.

Cartoon by Thierry Gregorius (Sourced from Flickr under Creative Commons – Some Rights Reserved)

There’s no doubt that customer, transaction, behavioral, geographic and demographic data points can be valuable for analysis and forecasting. When used appropriately, and in conjunction with relevant tools, this data can even yield new insights. And when combined with contextual and psychometric analysis, it can give rise to whole new data-driven businesses.

Of course, we often use simple trend analysis to reveal underlying patterns and changes in behaviour. (“If you can’t measure it, you can’t manage it.”) But the core issue is: what is this data actually telling us? For example, if the busiest time for online banking is during commuting hours, what opportunities does this present? (Rather than, “how much more data can we generate from even more frequent data capture…”)
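That kind of trend analysis can be sketched in a few lines. The timestamps below are invented for illustration; the point is simply that bucketing events by hour of day surfaces the commuting-time pattern the question is asking about:

```python
from collections import Counter
from datetime import datetime

# Hypothetical login timestamps from an online-banking access log
logins = [
    "2015-03-02 08:15", "2015-03-02 08:40", "2015-03-02 17:55",
    "2015-03-02 18:10", "2015-03-02 12:30", "2015-03-03 08:05",
]

# Bucket each login into its hour of the day
by_hour = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in logins)

# The busiest hours hint at commuting-time usage patterns
for hour, count in by_hour.most_common(3):
    print(f"{hour:02d}:00 – {count} login(s)")
```

The interesting work, of course, starts after the chart: deciding what to do with the fact that 8am dominates.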

I get that companies want to know more about their customers so they can “understand” them, and anticipate their needs. Companies are putting more and more effort into analysing the data they already have, as well as tapping into even more sources of data, to create even more granular data models, all with the goal of improving customer experience. It’s just a shame that few companies have a really good single view of their customers, because often, data still sits in siloed operations and legacy business information systems.

There is also a risk that, by trying to enhance and further personalise the user experience, companies are raising their customers’ expectations to a level that cannot be fulfilled. Full customisation would ultimately mean creating products with a customer base of one. Plus, customers will come to expect companies to really “know” them, to treat them as unique individuals with their own specific needs and preferences. Totally unrealistic, of course, because such solutions are mostly impossible to scale, and are largely unsustainable.

Next week: Startup Governance


The 3L’s that kill #data projects

The typical data project starts with the BA or systems architect asking: “fast, cheap or good – which one do you want?” But in my experience, no matter how much time you have, how much money you are willing to throw at it, or what features you are willing to sacrifice, many initiatives are doomed to fail before you even start because of inherent obstacles – what I like to refer to as the 3L’s of data projects: Legacy, Latency and Lack of governance.

Image taken from “Computers at Work” © 1969 The Hamlyn Publishing Group

Reflecting on work I have been doing with various clients over the past few years, it seems to me that despite their commitment to invest in system upgrades, migrate their content to new delivery platforms and automate their data processing, they often come unstuck due to fundamental flaws in their existing operations:


Legacy

This is the most common challenge – overhauling legacy IT systems or outmoded data sets. Often, the incumbent system is still working fine (provided someone remembers how it was built, configured or programmed), and the data in and of itself is perfectly good (as long as it can be kept up-to-date). But the old applications won’t talk to the new ones (or even each other), or the data format is not suited to new business needs or customer requirements.

Legacy systems require the most time and money to replace or upgrade. A colleague who works in financial services was recently bemoaning the costs being quoted to rewrite part of a legacy application – it seemed an astronomical amount of money to write a single line of code…

As painful as it seems, there may be little alternative but to salvage what data you can, decommission the software and throw it out along with the old mainframe it was running on!


Latency

Many data projects (especially in financial services) focus on reducing systems latency to enhance high-frequency and algorithmic securities trading, data streaming, real-time content delivery, complex search and retrieval, and multiple simultaneous user logins. From a machine-to-machine data handover and transaction perspective, such projects can deliver spectacular results – with the goal being end-to-end straight-through processing in real time.

However, what often gets overlooked is the level of human intervention – from collecting, normalizing and entering the data, to the double- and triple-handling to transform, convert and manipulate individual records before the content goes into production. For example, when you contact a telco, utility or other service provider to update your account details, have you ever wondered why they tell you it will take several working days for these changes to take effect? Invariably, the system that captures your information in “real-time” needs to wait for someone to run an overnight batch upload or someone else to convert the data to the appropriate format or yet another person to run a verification check BEFORE the new information can be entered into the central database or repository.

Latency caused by inefficient data processing not only costs time, it can also introduce data errors caused by multiple handling. Better to reduce the number of hand-off stages, and focus on improving data quality via batch sampling, error rate reduction and “capture once, use many” workflows.

Which leads me to the third element of the troika – data governance (or the lack thereof).


Lack of governance

In an ideal world, organisations would have an overarching data governance model, which embraces formal management and operational functions including data acquisition, capture, processing, maintenance and stewardship.

However, we often see that the lack of a common data governance model (or worse, a laissez-faire attitude that allows individual departments to do their own thing) means there is little co-operation between functions, additional costs arising from multiple handling and higher error rates, plus inefficiencies in getting the data to where it needs to be within the shortest time possible and within acceptable transaction costs.

Some examples of where even a simple data capture model would help include:

  • standardising data entry rules for basic information like names and addresses, telephone numbers and postal codes
  • consistent formatting for dates, prices, measurements and product codes
  • clear data structures for parent/child/sibling relationships and related parties
  • coherent tagging and taxonomies for field types, values and other attributes
  • streamlining processes for new record verification and de-duplication
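Even the simplest of those rules can be codified. Here is a minimal sketch, with invented field names and an assumed DD/MM/YYYY date input (common in Australia), showing data-entry standardisation and de-duplication applied at the point of capture:

```python
import re

def normalise_record(raw: dict) -> dict:
    """Apply simple data-entry rules: tidy names, standardise phone and date formats."""
    # Collapse stray whitespace and use consistent capitalisation for names
    name = " ".join(raw["name"].split()).title()
    # Keep digits only, then store the last 10 digits as an illustrative local number
    digits = re.sub(r"\D", "", raw["phone"])
    phone = digits[-10:] if len(digits) >= 10 else digits
    # Store dates as ISO 8601 (assumes DD/MM/YYYY on input)
    d, m, y = raw["date_joined"].split("/")
    return {"name": name, "phone": phone, "date_joined": f"{y}-{m.zfill(2)}-{d.zfill(2)}"}

def deduplicate(records: list) -> list:
    """Drop records whose (name, phone) pair has already been seen."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["name"], rec["phone"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Two raw entries that are really the same customer, captured inconsistently
raw = [
    {"name": "  jane   citizen ", "phone": "(03) 9123 4567", "date_joined": "5/3/2015"},
    {"name": "Jane Citizen", "phone": "0391234567", "date_joined": "05/03/2015"},
]
clean = deduplicate([normalise_record(r) for r in raw])
```

The design point is the ordering: normalise first, then de-duplicate – the two raw entries above only collapse into one record because the entry rules were applied before the comparison.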

From experience, autonomous business units often work against the idea of a common data model: because of the way departmental IT budgets are handled (including the P&L treatment of, and ROI assumptions used for, managing data costs); because every team thinks it has unique and special data needs which only it can address; or because of a misplaced sense of “ownership” over enterprise data (notwithstanding compliance firewalls and other regulatory requirements that necessitate some data separation).


One way to think about major data projects (systems upgrades, database migration, data automation) is to approach them rather like a house renovation or extension: if the existing foundations are inadequate, or if the old infrastructure (pipes, wiring, drains, etc.) is antiquated, what would your architect or builder recommend (and how much would they quote) if you said you simply wanted to incorporate what was already there into the new project? Would your budget accommodate a major retrofit or complex re-build? And would you expect to live in the property while the work is being carried out?

Next week: AngelCube15 – has your #startup got what it takes?