In recent years, market data vendors and their clients have been fixated on meeting the demand for low-latency feeds to support high-frequency, algorithmic and dark pool trading, while simultaneously responding to the post-GFC regulatory environment. New regulations continue to place increased operating burdens and costs on market participants, with a current focus on know your customer (KYC), pre-trade analytics and benchmark transparency.
For banks and asset managers, the cost of managing data is now seen as as big an issue as the cost of acquiring the data itself. Furthermore, the need to meet regulatory obligations at every stage of every client transaction is adding to operating expenses – costs which cannot easily be recovered, thereby diminishing previously healthy transactional margins.
I was in Hong Kong recently, and had the opportunity to attend the Asia Pacific Financial Information Conference, courtesy of FISD. This annual event, the largest of its kind in the region, brings together stock exchanges, data vendors and financial institutions. It has been a few years since I last attended, so it was encouraging to see that delegate numbers have continued to grow, although of the many stock exchanges in the region, only a few had taken exhibition stands, and representation from buy-side institutions and asset managers was still comparatively low. However, many major sell-side institutions and plenty of vendors were in attendance, along with a growing number of service providers across data networking, hosting and management.
Speaking to delegates, it was clear that there is a risk of regulation overload: not just the volume, but also the complexity and cost of compliance. Plus, it seemed that despite frequent industry consultation, there is limited co-ordination between the various market regulators, resulting in overlap between jurisdictions and duplication across different regulatory functions. Are any of these regulations having the desired effect, or are they simply creating unforeseen outcomes?
One major post-GFC development has been the establishment of a common legal entity identifier (LEI) for issuers of securities and their counterparts. (This was a direct response to the Lehman collapse, where market participants were unable to correctly and accurately identify counterparty risk in their trading portfolios, especially for derivatives such as credit default swaps.) However, despite a coordinated international effort, a published standard for the common identifier, and a network of approved LEI issuers, progress in assigning LEIs has been slow (especially in Asia Pacific), and coverage does not reflect market depth. For example, one data manager estimated that of the 20,000 reportable entities that his bank deals with, only 5,000 had so far been assigned LEIs.
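For context, the published standard is ISO 17442: a 20-character alphanumeric code whose final two digits are check digits computed under ISO 7064 MOD 97-10 (the same scheme IBANs use). As a minimal sketch, assuming a consumer that wants to screen identifiers before onboarding, validation in Python looks something like this (the function name is my own):

```python
import re

def lei_is_valid(lei: str) -> bool:
    """Check an LEI's structure and ISO 7064 MOD 97-10 check digits."""
    lei = lei.strip().upper()
    # ISO 17442 structure: 18 alphanumeric characters plus 2 check digits.
    if not re.fullmatch(r"[0-9A-Z]{18}[0-9]{2}", lei):
        return False
    # Map each character to a number (0-9 as-is, A=10 .. Z=35); the
    # concatenated value modulo 97 must equal 1.
    return int("".join(str(int(c, 36)) for c in lei)) % 97 == 1

print(lei_is_valid("00000000000000000098"))  # True: check digits consistent (not a real LEI)
print(lei_is_valid("00000000000000000000"))  # False: check digits fail
```

Of course, a syntactically valid code still has to be looked up against the registry of approved issuers to mean anything.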
Financial institutions need to consume ever more market data, for more complex purposes, and at multiple stages of the securities trading life-cycle:
- pre-trade analysis (especially to meet KYC obligations);
- trade execution (often using best execution venues);
- post-trade confirmation, settlement and payment;
- portfolio reconciliation;
- asset valuation (which, in the absence of mark-to-market pricing, means evaluated pricing, often requiring more than one independent source; see the sketch after this list);
- processing corporate actions (in a consistent and timely fashion, and taking account of different taxation rules);
- financial reporting and accounting standards (local and global); and
- benchmark transparency (and transparency around the other underlying data used in the creation and administration of market indices, and in constructing investable products).
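On the valuation point, here is a minimal sketch of evaluated pricing, assuming quotes from independent contributors; the contributor names, the median rule and the minimum-source threshold are my own illustration, not any vendor's methodology:

```python
from statistics import median

def evaluated_price(quotes: dict[str, float], min_sources: int = 2) -> float:
    """Derive an evaluated price from independent contributor quotes.

    The median is used so that a single stale or off-market quote
    cannot drag the valuation; a minimum source count is enforced.
    """
    if len(quotes) < min_sources:
        raise ValueError(f"need at least {min_sources} independent sources, got {len(quotes)}")
    return median(quotes.values())

# Hypothetical contributor quotes for an illiquid bond (per 100 face value):
print(evaluated_price({"dealer_a": 98.75, "dealer_b": 98.90, "broker_c": 98.80}))  # 98.8
```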
Yet with lower trading volumes and increased compliance costs, operating margins are inevitably being squeezed. This is probably hitting data vendors hardest, since data is increasingly seen as a commodity, and the cost of acquiring new data sets has to be offset against both the onboarding and switching costs and the cost of moving data around to multiple users, applications and locations.
The overloaded data managers from the major financial institutions said they wished stock exchanges and vendors would adopt more common industry standards for data licensing and pricing. That seems reasonable, until you hear the same data managers claim they each have their own particular requirements, and therefore a “one size fits all” approach won’t work for them. Besides, whereas data was once sold either on an enterprise-wide basis or per user, usage is now divided along several dimensions (modelled in the sketch after this list):
- human users and machine consumption;
- full access versus non-display only;
- internal and external use;
- “as is” compared to derived applications; and
- pre-trade and post-trade execution.
Oh, and then there’s the ongoing separation of real-time, intraday, end-of-day and static data.
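To see why a single licence schema is hard, here is a hypothetical sketch of those usage dimensions as a data structure; all the names are illustrative, not drawn from any exchange or vendor contract:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enumerations of the licensing dimensions listed above.
class Consumer(Enum):
    HUMAN = "human"
    MACHINE = "machine"

class Access(Enum):
    FULL = "full"
    NON_DISPLAY = "non_display"

class Scope(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class Form(Enum):
    AS_IS = "as_is"
    DERIVED = "derived"

class Stage(Enum):
    PRE_TRADE = "pre_trade"
    POST_TRADE = "post_trade"

class Timeliness(Enum):
    REAL_TIME = "real_time"
    INTRADAY = "intraday"
    END_OF_DAY = "end_of_day"
    STATIC = "static"

@dataclass(frozen=True)
class DataLicence:
    """One priced combination of usage dimensions for a data set."""
    consumer: Consumer
    access: Access
    scope: Scope
    form: Form
    stage: Stage
    timeliness: Timeliness

# Even at this coarse granularity, one data set already has
# 2**5 * 4 = 128 distinct licensable uses to negotiate and price.
```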
This all raises the obvious question: if more data consumption does not necessarily mean better margins for data vendors (despite the need to use the same data for multiple purposes), who is making money from market data?
While the stock exchanges are the primary source of market data for listed equities and exchange-traded securities, pricing data for OTC securities and derivatives has to be sourced from dealers, inter-bank brokers, contributing traders and order confirmation platforms. The major data vendors have done a good job over the years of collecting, aggregating and distributing this data – but now, with a combination of cost pressures and advances in technology, new providers are offering to help clients manage the sourcing, processing, transmission and delivery of data. One conference delegate commented that the next development will be microbilling (i.e., pricing based on actual consumption of each data item by individual users for specific purposes) and suggested this was an opportunity for a disruptive newcomer.
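For a sense of what microbilling might look like in practice, here is a minimal sketch, assuming a hypothetical usage-event log and a per-item, per-purpose rate card (all names and prices invented):

```python
from collections import defaultdict
from typing import NamedTuple

class UsageEvent(NamedTuple):
    """A hypothetical metering record: one user touching one data item."""
    user: str
    item: str      # e.g. a feed field or instrument identifier
    purpose: str   # e.g. "pre_trade", "valuation"

# Illustrative rate card: price per access in micro-dollars, keyed by
# (item, purpose) -- a real rate card would be far more granular.
RATES = {
    ("quote", "pre_trade"): 100,
    ("quote", "valuation"): 20,
}

def bill(events: list[UsageEvent]) -> dict[str, int]:
    """Aggregate per-user charges from metered consumption."""
    totals: dict[str, int] = defaultdict(int)
    for ev in events:
        totals[ev.user] += RATES.get((ev.item, ev.purpose), 0)
    return dict(totals)

events = [
    UsageEvent("trader_1", "quote", "pre_trade"),
    UsageEvent("trader_1", "quote", "pre_trade"),
    UsageEvent("fund_admin", "quote", "valuation"),
]
print(bill(events))  # {'trader_1': 200, 'fund_admin': 20}
```

The hard part, of course, is not the arithmetic but the metering: capturing every access, by every user, for every purpose, without adding latency.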
Finally, other emerging developments included the use of social media in market sentiment analysis (e.g., for algo-based trading), data visualisation, and the deployment of dedicated apps to manage “big data” analytics.
Next week: Australia 3.0