What the *%@#? Dave McClure vents his spleen…

The final Lean Startup Melbourne event of 2014 was a Q&A with Dave McClure, tech entrepreneur, early-stage investor and founder of 500 Startups. It was certainly an ear-opening experience, as Dave laced his comments with enough expletives to fund a small start-up (if only the organisers had thought to provide a swear jar…).

But while he was vociferous in his refusal to answer questions like “what’s hot?”, or “where’s the next big thing?”, he did provide some refreshing insights on how founders and investors need to adjust their expectations on funding and returns.

The event was hosted by inspire9, with sponsorship from BlueChilli, General Assembly, and Loud & Clear. Adrian Stone from Investors’ Organisation was acknowledged for helping to bring Dave to Australia, and Amanda Gome was the MC for the evening.

Dave’s investing model is basically a numbers game – identify a large enough pool of startup opportunities and place smaller “bets” on each one, in the expectation that only 10% will succeed, that of those only 10% will be really successful, and that very, very few will reach an IPO – but the spread of successful bets should each return between 5x and 20x. By contrast, some investors still try to “bet on unicorns”, expecting a 20x-25x exit every time. Such opportunities will become increasingly unlikely as the technology costs of production continue to decrease, meaning startups no longer require the same level or type of funding.
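To illustrate the arithmetic behind that numbers game (the fund size, cheque size and hit rates below are hypothetical figures of my own, not 500 Startups’ actual maths):

```python
def portfolio_multiple(n_bets, cheque, win_rate, avg_winner_multiple):
    """Overall multiple on invested capital for a spray-and-pray portfolio
    where losers return nothing and winners return a flat average multiple."""
    invested = n_bets * cheque
    winners = n_bets * win_rate
    returned = winners * cheque * avg_winner_multiple
    return returned / invested

# Hypothetical fund: 100 small bets, 10% winners.
# If winners only average 10x, the whole portfolio merely breaks even:
print(portfolio_multiple(100, 100_000, 0.10, 10))  # 1.0
# So with a 10% hit rate, the winners must average well above 10x
# (towards the 20x end of the 5x-20x spread) before the fund makes money:
print(portfolio_multiple(100, 100_000, 0.10, 20))  # 2.0
```

This is why the spread matters: at a 10% hit rate, the handful of winners has to carry the ninety losers.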

Based on current trends, Dave sees huge potential in video commerce, mobile video, and anything that monetizes search – e.g., influencing followers via social media, and converting this traction to sales driven by personalised recommendations. He’s also big on Spanish- and Arabic-speaking markets, and “anything that arbitrages sexism and racism” – hence his interest in women and minority entrepreneurs.

Dave’s advice is pretty simple: get the product, market and revenue model right, and then build scale into the business as quickly as possible. As such, he hates people asking him his opinion on their startup ideas (“what do I know?”); instead, he emphasises the need to win paying (and profitable) end users, and then to build scale through marketing, as the true proof of concept.

Throughout the evening, Dave talked a lot about unit economics – not just production costs, but the real cost of customer acquisition, and time to convert leads to sales. It was also interesting that unlike some speakers at previous Lean Startup events, he was not particularly negative towards startups developing enterprise solutions – rather, he prefers to segment clients based upon their decision-making and purchasing limits. So, he looks at revenues based on the respective number of end users, SME customers and enterprise clients, because of their different price points and procurement methods, as well as the different customer acquisition costs.
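A back-of-the-envelope sketch of the unit economics Dave was talking about – the segment figures below are invented for illustration, but they show why price point and acquisition cost have to be analysed per customer type rather than in aggregate:

```python
def months_to_recover_cac(cac, monthly_revenue, gross_margin):
    """How many months of gross profit it takes to pay back the cost
    of acquiring one customer (all inputs are per-customer figures)."""
    monthly_gross_profit = monthly_revenue * gross_margin
    return cac / monthly_gross_profit

# Hypothetical segments with different price points and acquisition costs:
segments = {
    "end user":   dict(cac=50,     monthly_revenue=10,    gross_margin=0.8),
    "SME":        dict(cac=2_000,  monthly_revenue=300,   gross_margin=0.7),
    "enterprise": dict(cac=50_000, monthly_revenue=5_000, gross_margin=0.6),
}
for name, s in segments.items():
    print(f"{name}: {months_to_recover_cac(**s):.1f} months to recover CAC")
```

The payback period differs markedly across segments, which is one reason Dave prefers to look at revenue per segment rather than a single blended figure.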

Finally, he encouraged potential startups to think of the “most boring and mindless” business activities or processes, and figure out ways to make them more interesting via apps that use gamification and social media tools.

 

The New Alchemy – Turning #BigData into Valuable Insights

Here’s the paradox facing the consumption and analysis of #BigData: the cost of data collection, storage and distribution may be decreasing, but the effort to turn data into unique, valuable and actionable insights is actually increasing – despite the expanding availability of data mining and visualisation applications.

One colleague has described the deluge of data that businesses are having to deal with as “the firehose of information”. We are almost drowning in data, and most of us are navigating upriver without a steering implement. At the risk of stretching the aquatic metaphor, it’s rather like the Sorcerer’s Apprentice: we wanted “easy” data, so the internet, mobile devices and social media granted our wish in abundance. But we got lazy/greedy, forgot how to turn the tap off, and now we can’t find enough vessels to hold the stuff, let alone figure out what we are going to do with it. Switching analogies, it’s a case of “can’t see the wood for the trees”.

Perhaps it would be helpful to provide some terms of reference: what exactly is “big data”?

First, size definitely matters, especially when you are thinking of investing in new technologies to process more data more often. For any database smaller than, say, 0.5TB, the economies of scale may dissuade you from doing anything other than deploying more processing power and/or capacity, as opposed to paying for a dedicated, super-fast analytics engine. (Of course, the situation also depends on how fast the data is growing, how many transactions or records need to be processed, and how often those records change.)

Second, processing velocity, volume and data variety are also factors – for example, unless you are a major investment bank with a need for high-frequency, low-latency algorithmic market trading solutions, then you can probably make do with off-the-shelf order routing and processing platforms. Even “near real-time” data processing speeds may be overkill for what you are trying to analyze. Here’s a case in point:

Slick advertorial content, and I agree that the insights (and opportunities) are in the delta – what’s changed, what’s different? But do I really need to know what my customers are doing every 15 seconds? For a start, it might have been helpful to explain what APM is (I had to Google it, and CA did not come up in the Top 10 results). Then explain what it is about the resulting analytics that NAB is now using to drive business results. For instance, what does it really mean if peak mobile banking usage is 8-9am (and did I really need an APM solution to find this out)? Is NAB going to lease more mobile bandwidth to support client access on commuter trains? Has NAB considered push technology to give clients account balances at scheduled times? Is NAB adopting technology to shape transactional and service pricing according to peak demand? (Note: when discussing this example with some colleagues, we found it ironic that a simple inter-bank transfer can still take several days before the money reaches your account…)

Third, there are trade-offs when dealing with structured versus unstructured data. Buying dedicated analytics engines may make sense when you want to do deep mining of structured data (“tell me what I already know about my customers”), but that might only work if the data resides in a single location, or in multiple sites that can easily communicate with each other. Often, highly structured data is also highly siloed, meaning the efficiency gains may be marginal unless the analytics engine can do the data trawling and transformation more effectively than traditional data interrogation (e.g., query and matching tools). On the other hand, the real value may be in unstructured data (“tell me something about my customers I don’t know”), typically captured in a single location but usually monitored only for visitor volume or stickiness (e.g., a customer feedback portal or user bulletin board).

So, to data visualisation.

Put simply, if a picture can paint a thousand words, data visualisation should be able to unearth the nuggets of gold sitting in your data warehouse. Our “visual language” is capable of identifying patterns as well as discerning abstract forms, of describing subtle nuances of shade as well as defining stark tonal contrasts. But I think we are still working towards a visual taxonomy that can turn data into meaningful and actionable insights. A good example of this might be so-called sentiment analysis (e.g., derived from social media commentary), where content can be weighted and scored (positive/negative, frequency, number of followers, level of sharing, influence ranking) to show what your customers might be saying about your brand on Twitter or Facebook. The resulting heat map may reveal what topics are hot, but unless you can establish some benchmarks, or distinguish between genuine customers and “followers for hire”, or can identify other connections with this data (e.g., links with your CRM system), it’s an interesting abstract image – but can you really understand what it is saying?
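As a rough sketch of the weighting-and-scoring idea described above (the fields and weighting scheme here are my own illustration, not any particular vendor’s method):

```python
def weighted_sentiment(posts):
    """Score brand sentiment from social posts: each post's polarity
    (-1..+1) is weighted by its author's reach and how widely it spread."""
    total, weight_sum = 0.0, 0.0
    for p in posts:
        w = p["followers"] * (1 + p["shares"])  # reach amplified by sharing
        total += p["polarity"] * w
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

posts = [
    {"polarity": +0.8, "followers": 100,   "shares": 2},   # happy customer
    {"polarity": -0.9, "followers": 5_000, "shares": 40},  # influential complaint
]
print(weighted_sentiment(posts))  # dominated by the high-reach negative post
```

Note that this simple weighting cannot tell a genuine customer from a “follower for hire” – which is exactly the benchmark problem raised above.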

Another area where data visualisation is being used is targeted marketing based on customer profiles and sales history (e.g., location-based promotion using NFC solutions powered by data analytics). For example, with more self-serve check-outs, supermarkets have to re-think where they place the impulse-buy confectionery displays (and those magazine racks that were great for killing time while queuing up to pay…). What if they could scan your shopping items as you place them in your basket and, combined with what they already know about your shopping habits, map your journey around the store to predict what’s on your shopping list – prompting you via your smartphone (or the basket itself?) towards your regular items, even saving you time in the process? And then they reward you with a special “in-store only” offer on your favourite chocolate. Sounds a bit spooky, but we know retailers already do something similar with their existing loyalty cards and reward programs.

Finally, what are some of the tools that businesses are using? Here are just a few that I have heard mentioned recently (please note I have not used any of these myself, although I have seen sales demos of some applications – these are definitely not personal recommendations, and you should obviously do your own research and due diligence):

For managing and distributing big data, Apache Hadoop was name-checked at a financial data conference I attended last month, along with kdb+ to process large time-series data, and GetGo to power faster download speeds. Python was cited for developing machine learning and even predictive tools, while DataWatch is taking its data transformation platform into real-time social media sentiment analysis (including heat and field map visualisation). YellowFin is an established dashboard reporting tool for BI analytics and monitoring, and of course Tableau is a popular visualisation solution for multiple data types. Lastly, ThoughtWeb combines deep data mining (e.g., finding hitherto unknown connections between people, businesses and projects via media coverage, social networks and company filings) with innovative visualisation and data display.

Next week: a few profundities (and many expletives) from Dave McClure of 500 Startups

Australia 3.0 – beyond the mining boom….

In the wake of the G20 Brisbane meeting, Australia’s place in the world has been under scrutiny, in particular our role in Asia Pacific. With the announcement of a Free Trade Agreement with China (following similar treaties with Japan and Korea), a flurry of extra-mural visits by G20 leaders, and our current Presidency of the UN Security Council, you’d be forgiven for thinking that Australia was now front and centre of the world stage. Well, I hate to disappoint anyone, but I’ve recently spent 3 weeks overseas, and the only news I heard from home was the death of Gough Whitlam.

However, this does seem like a timely opportunity* to consider the question: “What’s next?” after the resources bubble has burst. This was the topic of discussion at this month’s Directors Suite luncheon, where I delivered some opening remarks based on the following text: 

Introduction

Our theme of Australia 3.0 is not to be confused with the think tank of the same name. Although it is interesting to note that their four areas of interest are Infrastructure, Health, Government Services and Mining.**

Historical Perspective?

I’m not a political or economic historian, but I would suggest that Australia’s policy agenda has followed a rough but discernible narrative:

  • Australia 1.0 – from the launch of Federation to the 1960s – post-colonial era, bookended by WWI and the Vietnam War and, despite the dominant figure of Menzies, largely a protectionist, semi-nationalised, highly collective and quasi-socialist mixed economy
  • Australia 1.5 – The Whitlam Upgrade (or Experiment) – radical, short-lived, too much too soon?
  • Australia 2.0 – The Hawke/Keating System Reboot – currency and interest rate reforms, major privatization, re-engagement with Asia
  • Australia 2.5 – Rudd/Gillard bug fixes – a micro-managed response to the GFC, but despite the hype/promise, not much was actually achieved in macro terms, witness the 2020 summit…

What Issues Will Define Australia 3.0?

If we take it as read that there are demographic and environmental challenges ahead, I see that there are 5 Key Drivers for social and economic change, each with their own particular issues and consequences:

1. THE BIG ONE:
Economic activity post-GFC, post-mining boom, post-dollar parity
The “new normal”: slow/low/no growth and the struggle for sustainable growth; sunset on the baby boomer era; how to get internationally competitive, streamline SME regulations, remove the burden of tax administration
2. THE TECH TREND:
The age of mobile, cloud and social technology
Digital innovation backed by a new spirit of Gen Y/Gen I entrepreneurial start-ups; no more “job for life” employment – 1.3m non-employing businesses in Australia…. (40% of US workers will be freelance/self-employed by 2020)
3. THE END OF EMPIRE(S):
Declining respect for/relevance of political structures & public institutions
Minority governments, heightened clash of ideologies, power shift from Federal/State to Regional/Community; also reflects a failure of leadership within political parties, unions, corporations, religious bodies, professional sporting codes, armed forces etc.
4. OUR PLACE IN THE REGION
Free Trade Agreements with Asia, realigning regional interests
At what price? Implications for our traditional political allegiances? Challenges to Australia’s regional relevance if it’s one-way traffic only? Threat to food security?
5. NEW NATION BUILDING
Upgrading declining infrastructure and building capacity for the future
Who decides? Who pays? NIMBY? Too little too late?

Some international perspectives

Based on my recent travels to the UK and Hong Kong, we can make some interesting comparisons with conditions here at home. For example, like Australia, both UK and HK have very unpopular governments at present (but for different reasons); they are currently enjoying relatively higher (albeit still sluggish) GDP growth rates compared to other developed economies; and like the Australian dollar, Sterling has also declined recently against the US dollar (HK’s dollar is, of course, pegged to the US).

I got the impression that the cost of living in the UK has not gone up much since my last visit just over two years ago, although like Australia’s capital cities, London house prices are probably achieving/exceeding pre-GFC levels. (However, GDP growth is mainly due to pent-up demand from continuing austerity measures.) Relations with the EU are strained by budget issues and immigration policies. Following the Scottish referendum, there has been increased discussion on regional devolution, and Manchester looks set to acquire new regional powers (similar to the Mayor of London model). London remains an important international financial centre, while selected manufacturing and services industries are enjoying renewed growth. There were numerous signs of major infrastructure projects (notably Crossrail in London) and urban renewal initiatives (such as the Manchester City Library upgrade).

Meanwhile, HK is going through yet another constitutional crisis under the post-handover Basic Law (“One Country, Two Systems”). The Occupy Central protests, aka the Umbrella Movement, were the most orderly demonstrations I have ever seen. The protests are multi-faceted; they are not just about Universal Suffrage, but also reflect social, economic and cultural struggles/challenges. There is another (speculative) property boom, fuelled in part by new subway systems, new commercial buildings, and a harbour-front tunnel to bypass the CBD; and in part by hundreds of new apartments (attracting mainland buyers). Property prices are at another all-time high (new developments can cost US$4-5m for less than 1,000 sq. ft.) – no wonder that about half of the population now live in public housing projects, and nearly one-fifth are estimated to be living below the poverty line. But food, clothing, public transport, eating out and general consumer goods can still be bought at modest prices (as long as you avoid high-end brands in high-end malls).

Making The Right Connections

I spent two days at a major Asia Pacific financial services conference in HK aimed at stock exchanges, banks and data vendors, where I only saw a couple of delegates from Australian banks, nobody from the ASX and no-one from the Australian superannuation or asset management sectors.

Does this matter? I think it does.

There was much talk about the convertibility (or internationalization) of the RMB, and one currency broker I spoke to suggested that Australia will be the next target for major RMB investment – it’s not just about Toorak mansions. There are huge RMB deposits sitting in HK, and Australia is an approved investment destination (and Australian-managed funds are an approved asset class) for approved mainland investors. The money has to go somewhere.***

By standalone stock market capitalisation, ASX is ranked 14th globally, but represents only about 2.2% of global market value. Furthermore, when taking into account recent stock exchange mergers and the new HK/Shanghai Stock Connect trading platform, the combined Hong Kong/Shanghai/Shenzhen market cap will leapfrog into 2nd place globally, and into 1st place in Asia Pacific, displacing Japan from its long-held position. And even though conference delegates often talked about the 4 key regional markets of HK, Japan, Singapore and Australia, the ASX comprises a mere 6% of regional market value, and the only exchange ASX has had serious (but failed) merger talks with is Singapore – which does not even make the global top 20.

The ASX market cap is $1.5tn; total superannuation funds and assets under management are about $1.6tn; while the equity in family owned businesses that needs to be refinanced over the next 5-10 years is estimated to be about $3.5tn.

Even financial market experts in Asia were acknowledging that wealth management, retirement planning and private banking services are gaining more significance than IPOs and equities trading. This in turn places greater emphasis on long-term investments, asset management for future returns, a new role for private equity, and more allocations to fixed income and bonds. But regulatory and operating costs threaten to erode any value that is being created in these asset classes, unless service providers and intermediaries can generate better efficiencies and/or develop additional, high-value products and services.

For our part, do we need to explore the role of alternative stock exchanges and non-traditional fund-raising platforms (especially for emerging companies and infrastructure projects)? And what is happening with Australia’s anticipated role as a regional fund and asset manager?

Implications for NEDs

As Non-Executive Directors, does this mean we should be shifting our focus from the “holy grail” of a seat on a public board, and instead look at how we can help, support and build value in the small businesses that will continue to be the long-term drivers of economic growth, and ensure that the boards of super funds have adequate governance?

Footnotes:

*We were not alone: “Head of PwC Australia addresses National Press Club”

**See my own “3 Pillars of the Digital Economy”

***As part of the FTA with Australia, China has opened a RMB clearing house in Sydney, and granted Australia a portion of RQFII asset allocation. And soon after the FTA was announced, the NSW Treasury issued an RMB bond.

Next week: Managing Big Data Analytics and Visualization

 

Who’s making money from market data?

In recent years, market data vendors and their clients have been fixated on supporting the demand for low-latency feeds to support high-frequency, algorithmic and dark pool trading while simultaneously responding to the post-GFC regulatory environment. New regulations continue to place increased operating burdens and costs on market participants, with a current focus on know your customer (KYC), pre-trade analytics and benchmark transparency.

For banks and asset managers, the cost of managing data is now seen as being as big an issue as the cost of acquiring the data itself. Furthermore, the need to meet regulatory obligations at every stage of every client transaction is adding to operating expenses – costs which cannot easily be recovered, thereby diminishing previously healthy transactional margins.

I was in Hong Kong recently, and had the opportunity to attend the Asia Pacific Financial Information Conference, courtesy of FISD. This annual event, the largest of its kind in the region, brings together stock exchanges, data vendors and financial institutions. It has been a few years since I last attended, so it was encouraging to see that delegate numbers have continued to grow. That said, of the many stock exchanges in the region, only a few had taken exhibition stands, and representation from buy-side institutions and asset managers was still comparatively low. However, many major sell-side institutions and plenty of vendors were in attendance, along with a growing number of service providers across data networking, hosting and management.

Speaking to delegates, it was clear that there is a risk of regulation overload: not just the volume, but also the complexity and cost of compliance. Plus, it felt that, despite frequent industry consultation, there is limited co-ordination between the various market regulators, resulting in overlap between jurisdictions and duplication across different regulatory functions. Are any of these regulations having the desired effect, or simply creating unforeseen outcomes?

One major post-GFC development has been the establishment of a common legal entity identifier (LEI) for issuers of securities and their counterparties. (This was in direct response to the Lehman collapse, which exposed a widespread failure or inability to correctly and accurately identify counterparty risk in trading portfolios, especially for derivatives such as credit default swaps.) However, despite a coordinated international effort, a published standard for the common identifier, and a network of approved LEI issuers, progress in assigning LEIs has been slow (especially in Asia Pacific), and coverage does not reflect market depth. For example, one data manager estimated that of the 20,000 reportable entities that his bank deals with, only 5,000 had so far been assigned LEIs.

Financial institutions need to consume ever more market data, for more complex purposes, and at multiple stages of the securities trading life-cycle:

  • pre-trade analysis (especially to meet KYC obligations);
  • trade transaction (often using best execution forums);
  • post-trade confirmation, settlement and payment;
  • portfolio reconciliation;
  • asset valuation (and, in the absence of mark-to-market prices, evaluated pricing, often requiring more than one independent source);
  • processing corporate actions (in a consistent and timely fashion, and taking account of different taxation rules);
  • financial reporting and accounting standards (local and global); and
  • a requirement to provide more transparency around benchmarks (and other underlying data used in the creation and administration of market indices, and in constructing investable products).

Yet with lower trading volumes and increased compliance costs, this inevitably means that operating margins are being squeezed. Which is likely having most impact on data vendors, since data is increasingly seen as a commodity, and the cost of acquiring new data sets has to be offset against both the onboarding and switching costs and the cost of moving data around to multiple users, applications and locations.

The overloaded data managers from the major financial institutions said they wished stock exchanges and vendors would adopt more common industry standards for data licensing and pricing. Which seems reasonable, until you hear the same data managers claim they each have their own particular requirements, and therefore a “one size fits all” approach won’t work for them. Besides, whereas in the past, data was either sold on an enterprise-wide basis, or on a per-user basis, now data usage is divided between:

  • human users and machine consumption;
  • full access versus non-display only;
  • internal and external use;
  • “as is” compared to derived applications; and
  • pre-trade and post-trade execution.

Oh, and then there’s the ongoing separation of real-time, intraday, end-of-day and static data.

This all raises the obvious question: if more data consumption does not necessarily mean better margins for data vendors (despite the need to use the same data for multiple purposes), who is making money from market data?

While the stock exchanges are the primary source of market data for listed equities and exchange-traded securities, pricing data for OTC securities and derivatives has to be sourced from dealers, inter-bank brokers, contributing traders and order confirmation platforms. The major data vendors have done a good job over the years of collecting, aggregating and distributing this data – but now, with a combination of cost pressures and advances in technology, new providers are offering to help clients to manage the sourcing, processing, transmission and delivery of data. One conference delegate commented that the next development will be in microbilling (i.e., pricing based on actual consumption of each data item by individual users for specific purposes) and suggested this was an opportunity for a disruptive newcomer.
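To make the microbilling idea concrete, here is a toy sketch – the rate card, usage categories and field names are all invented for illustration, but they mirror the usage splits described earlier (display versus machine consumption, pre-trade versus post-trade):

```python
from collections import defaultdict

# Hypothetical price per data item, keyed by (consumption mode, trade stage):
RATES = {
    ("display", "pre-trade"):  0.002,
    ("machine", "pre-trade"):  0.005,
    ("display", "post-trade"): 0.001,
}

def microbill(usage_log):
    """Aggregate per-user charges from a log of individual data-item accesses,
    priced according to how (and at what stage) each item was consumed."""
    invoices = defaultdict(float)
    for user, mode, stage, items in usage_log:
        invoices[user] += RATES[(mode, stage)] * items
    return dict(invoices)

log = [
    ("trader_a",    "display", "pre-trade",  1_000),
    ("algo_engine", "machine", "pre-trade",  250_000),
    ("trader_a",    "display", "post-trade", 500),
]
print(microbill(log))
```

Even this toy version shows why microbilling is hard at scale: every single data item touched by every user has to be metered, classified and priced – which is presumably where the disruptive newcomer comes in.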

Finally, other emerging developments included the use of social media in market sentiment analysis (e.g., for algo-based trading), data visualisation, and the deployment of dedicated apps to manage “big data” analytics.

Next week: Australia 3.0