The Crypto Conversation

A short post this week – mainly to give a shout-out to my colleague, Andy Pickering, and the rest of the team at Brave New Coin. Andy kindly invited me to help celebrate the 250th edition of The Crypto Conversation, his regular podcast that has featured a pantheon of leading characters from the crypto and blockchain industry. On this recent edition, we talk about my journey into crypto, the highs (and lows) after six years in the industry, some aspects of “trust”, the usual Crypto Conversation “Hot Takes” and of course, a slightly contentious discussion on science fiction. Enjoy.

Listen here:

Spotify

Apple

Libsyn

Next week: The bells, the bells….

 

The General Taxonomy for Cryptographic Assets

It’s not often I get to shamelessly plug a project I have been involved with – so please indulge me in the case of Brave New Coin’s recent publication, “The General Taxonomy for Cryptographic Assets”. It’s a significant piece of work, designed to bring some structure to the classification of this new asset class.

In particular, it aims to help market participants (traders, brokers, investors, fund managers, asset managers, portfolio managers, regulators etc.) make sense of the growing list of digital currencies, as not all tokens are the same. Each one has a specific use case that needs to be understood in the context of Blockchain applications, whether decentralized protocols or trustless payment solutions.

The underlying database currently captures around 60 data points and metrics on around 700 tokens; in the coming months it will double in size, and it will be constantly maintained thereafter to keep current with the most significant assets.

Useful for portfolio screening, construction and diversification, the Taxonomy methodology and underlying database, when combined with Brave New Coin’s aggregated market data and indices, will provide a 360-degree view of each asset, combining key elements of a CUSIP or ISIN record, a company directory profile and a regulatory filing.
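
To make the classification idea concrete, here is a minimal sketch of what a single taxonomy record might look like in code. The field names are my own illustrative guesses, not Brave New Coin’s actual schema, which captures far more data points per asset.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyRecord:
    """One entry in a hypothetical cryptographic-asset taxonomy.

    Field names are illustrative only; the real Taxonomy captures
    around 60 data points and metrics per asset.
    """
    symbol: str                 # ticker-style identifier, e.g. "BTC"
    name: str                   # full asset name
    asset_class: str            # e.g. "payment token", "utility token"
    consensus_mechanism: str    # e.g. "proof-of-work"
    native_blockchain: str      # protocol the token lives on
    use_cases: list = field(default_factory=list)
    regulatory_notes: str = ""  # analogous to a regulatory filing entry

# Combined with live market data, a record like this supports the
# "360-degree view" described above: part identifier record (like a
# CUSIP/ISIN), part directory profile, part regulatory summary.
btc = TaxonomyRecord(
    symbol="BTC",
    name="Bitcoin",
    asset_class="payment token",
    consensus_mechanism="proof-of-work",
    native_blockchain="Bitcoin",
    use_cases=["store of value", "peer-to-peer payments"],
)
```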

The significance of having access to robust market data and reference data tools cannot be overstated, given the price volatility and emerging nature of this new asset class. The Taxonomy will be presented at various Blockchain and Crypto events over the coming weeks, but for further information, the authors can be contacted at: contact@bravenewcoin.com

Next week: APAC Blockchain Conference

Assessing Counterparty Risk post-GFC – some lessons for #FinTech

At the height of the GFC, banks, governments, regulators, investors and corporations were all struggling to assess the amount of credit risk that Lehman Brothers represented to global capital markets and financial systems. One of the key lessons learnt from the Lehman collapse was the need to take a very different approach to identifying, understanding and managing counterparty risk – a lesson which fintech startups would be well-advised to heed, but one which should also present new opportunities.

In Lehman’s case, the credit risk was not confined to the investment bank’s ability to meet its immediate and direct financial obligations. It extended to transactions, deals and businesses where Lehman and its myriad subsidiaries in multiple jurisdictions provided a range of financial services – from liquidity support to asset management; from brokerage to clearing and settlement; from commodities trading to securities lending. The contagion risk represented by Lehman was therefore not just the value of debt and other obligations it issued in its own name, but also the exposures represented by the extensive network of transactions where Lehman was a counterparty – such as acting as guarantor, underwriter, credit insurer, collateral provider or reference entity.

Before the GFC

Counterparty risk was seen purely as a form of bilateral risk. It related to single transactions or exposures. It was mainly limited to hedging and derivative positions. It was confined to banks, brokers and OTC market participants. In particular, it centred on the use of credit default swaps (CDS) to insure against the risk of an obligor (borrower or bond issuer) failing to meet its obligations in full and on time.

The problem is that there is no limit to the number of credit “protection” policies that can be written against a single default, much as the value of stock futures and options contracts written in the derivatives markets can outstrip the value of the underlying equities. This results in what is euphemistically called market “overhang”, where the total face value of derivative instruments trading in the market far exceeds the value of the underlying securities.

As a consequence of the GFC, global markets and regulators undertook a delicate process of “compression” to unwind the outstanding CDS positions back to their core underlying obligations, thereby averting a further credit squeeze as liquidity was released back into the market.
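
As a toy illustration of overhang and compression, the sketch below nets a circle of offsetting CDS notionals: the gross face value falls sharply while every party’s net position is preserved. This is a simplified model (one reference entity, no pricing or collateral), not an actual compression algorithm used by clearing houses.

```python
from collections import defaultdict

# Hypothetical bilateral CDS trades on one reference entity:
# (protection_buyer, protection_seller, notional)
trades = [
    ("A", "B", 100.0),
    ("B", "C", 100.0),
    ("C", "A", 80.0),
]

gross_before = sum(n for _, _, n in trades)  # 280.0 of face value "overhang"

# Step 1: net protection bought (+) or sold (-) per party
net = defaultdict(float)
for buyer, seller, notional in trades:
    net[buyer] += notional
    net[seller] -= notional

# Step 2: rebuild the smallest trade set that preserves every net position
buyers = [[p, v] for p, v in net.items() if v > 0]
sellers = [[p, -v] for p, v in net.items() if v < 0]
compressed = []
while buyers and sellers:
    b, s = buyers[-1], sellers[-1]
    size = min(b[1], s[1])
    compressed.append((b[0], s[0], size))
    b[1] -= size
    s[1] -= size
    if b[1] == 0:
        buyers.pop()
    if s[1] == 0:
        sellers.pop()

gross_after = sum(n for _, _, n in compressed)
print(gross_before, gross_after)  # 280.0 -> 20.0: same net risk, less overhang
```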

Post-GFC

Counterparty risk is now multi-dimensional. Exposures are complex and inter-related. It can apply to any credit-related obligation (loans, stored value cards, trade finance, supply chains etc.). It is not just a problem for banks, brokers and intermediaries. Corporate treasurers and CFOs are having to develop counterparty risk policies and procedures (e.g., managing individual bank lines of credit or reconciling supplier/customer trading terms).

It has also drawn attention to other factors for determining counterparty credit risk, beyond the nature and amount of the financial exposure, including:

  • Bank counterparty risk – borrowers and depositors both need to be reassured that their banks can continue to operate if there is any sort of credit event or market disruption. (During the GFC, some customers distributed their deposits among several banks – to diversify their bank risk, and to bring individual deposits within the scope of government-backed deposit guarantees)
  • Shareholder risk – companies like to diversify their share registry by having a broad investor base; but, if stock markets are volatile, some shareholders (e.g., overseas and retail investors) are more likely to sell off their shares, which depresses the market cap as prices fall
  • Concentration risk – in the past, concentration risk was mostly viewed from a portfolio perspective, and with reference to single name or sector exposures. Now, concentration risk has to be managed across a combination of attributes (geographic, industry, supply chain etc.), as sketched below
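
Here is a minimal sketch of what multi-attribute concentration measurement might look like in practice, using invented exposures: the same book is rolled up by geography and then by industry to reveal where the concentrations sit.

```python
from collections import defaultdict

# Hypothetical exposures: (counterparty, geography, industry, amount)
exposures = [
    ("Acme Bank",   "Greece", "banking",  5_000_000),
    ("Hellas Corp", "Greece", "shipping", 3_000_000),
    ("Nippon Ltd",  "Japan",  "banking",  4_000_000),
]

total = sum(amount for *_, amount in exposures)

def concentration(index):
    """Share of total exposure by one attribute (1=geography, 2=industry)."""
    buckets = defaultdict(float)
    for row in exposures:
        buckets[row[index]] += row[-1]
    return {k: round(v / total, 2) for k, v in buckets.items()}

print(concentration(1))  # by geography: Greece carries 2/3 of exposure
print(concentration(2))  # by industry:  banking carries 3/4 of exposure
```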

Implications for Counterparty Risk Management

Since the GFC, market participants need better access to more appropriate data, and the ability to interrogate and interpret that data for “hidden” or indirect exposures. For example, if your company is exporting to, say, Greece, and you are relying on your customers’ local banks to provide credit guarantees, how confident are you that the overseas bank will be able to step in if your client defaults on the payment?

Counterparty data is not always configured to easily uncover potential or actual risks, because the data is held in silos (by transactions, products, clients etc.) and not organized holistically (e.g., a single view of a customer by accounts, products and transactions, and their related parties such as subsidiaries, parent companies or even their banks).
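
A minimal sketch of that holistic “single view”: exposures booked in separate product silos are rolled up through a hypothetical parent/subsidiary hierarchy to a single counterparty group. Names and numbers are invented.

```python
# Hypothetical legal-entity hierarchy: child -> parent
parents = {
    "Lehman Brothers Intl (Europe)": "Lehman Brothers Holdings",
    "Lehman Brothers Japan":         "Lehman Brothers Holdings",
}

# Exposures as they sit in separate product silos
silo_exposures = {
    "derivatives": {"Lehman Brothers Intl (Europe)": 50.0},
    "sec_lending": {"Lehman Brothers Japan": 30.0},
    "loans":       {"Lehman Brothers Holdings": 20.0},
}

def ultimate_parent(entity):
    """Follow the hierarchy up to the group-level parent."""
    while entity in parents:
        entity = parents[entity]
    return entity

group_view = {}
for silo in silo_exposures.values():
    for entity, amount in silo.items():
        group = ultimate_parent(entity)
        group_view[group] = group_view.get(group, 0.0) + amount

print(group_view)  # {'Lehman Brothers Holdings': 100.0} -- the single view
```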

Business transformation projects designed to improve processes and reduce risk tend to be led by IT or Change Management teams, where data is often an afterthought. Even where there is a focus on data management, the data governance is not rigorous and lacks structure, standards, stewardship and QA.

Typical vendor solutions for managing counterparty risk tend to be disproportionately expensive or take an “all or nothing” approach (i.e., enterprise solutions that favour a one-size-fits-all solution). Opportunities to secure incremental improvements are overlooked in favour of “big bang” outcomes.

Finally, solutions may already exist in-house, but realizing the benefits requires better deployment of the available data and systems (e.g., by getting the CRM to “talk to” the loan portfolio).

Opportunities for Fintech

The key lesson for fintech in managing counterparty risk is that more data, and more transparent data, should make it easier to identify potential problems. Since many fintech startups are taking advantage of better access to, and improved availability of, customer and transactional data to develop their risk-calculation algorithms, this should help them flag issues such as possible credit events before they arise.
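
As a toy example of the kind of early-warning rule such an algorithm might encode, the sketch below flags counterparties whose recent repayment history is deteriorating. The data, features and threshold are all invented; a production model would be far richer.

```python
# Hypothetical monthly repayment history per counterparty:
# 1 = paid on time, 0 = missed or late
history = {
    "customer_a": [1, 1, 1, 1, 1, 1],
    "customer_b": [1, 1, 1, 0, 1, 0],
}

def early_warning(payments, window=3, max_misses=1):
    """Flag if recent misses exceed a tolerance, a simple stand-in
    for a real risk-scoring model built on richer data."""
    recent = payments[-window:]
    return recent.count(0) > max_misses

for name, payments in history.items():
    if early_warning(payments):
        print(f"{name}: review credit line before a formal credit event")
```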

Fintech startups are less hamstrung by legacy systems (e.g., some banks still run COBOL on their core systems), and can develop more flexible solutions that are better suited to the way customers interact with their banks. As an example, the proportion of customers who only transact via mobile banking is rapidly growing, which places different demands on banking infrastructure. More customers are expected to conduct all their other financial business (insurance, investing, financial planning, wealth management, superannuation) via mobile solutions that give them a consolidated view of their finances within a single point of access.

However, while all the additional “big data” coming from e-commerce, mobile banking, payment apps and digital wallets represents a valuable resource, if it is not used wisely it is just another data lake that is hard to fathom. The transactional and customer data still needs to be structured, tagged and identified so that it can be interpreted and analysed effectively.

The role of Legal Entity Identifiers in Counterparty Risk

In the case of Lehman Brothers, the challenge in working out which subsidiary was responsible for a specific debt in a particular jurisdiction was mainly due to the lack of formal identification of each legal entity that was party to a transaction. Simply knowing the counterparty was “Lehman” was not precise or accurate enough.

As a result of the GFC, financial markets and regulators agreed on the need for a standard system of unique identifiers for each and every market participant, regardless of their market roles. Hence the assignment of Legal Entity Identifiers (LEI) to all entities that engage in financial transactions, especially cross-border.

To date, nearly 400,000 LEIs have been issued globally by the national and regional Local Operating Units (LOU – for Australia, this is APIR). There is still a long way to go before every legal entity that conducts any sort of financial transaction has an LEI, because their use has not yet been universally mandated and is only a requirement for certain financial reporting purposes. (In Australia, for example, the identifier would in theory extend to all self-managed superannuation funds, since they buy and sell securities and are subject to regulation and reporting requirements by the ATO.)
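
For developers, checking whether a string is even a well-formed LEI is straightforward: ISO 17442 defines a 20-character alphanumeric code whose last two characters are check digits computed under ISO 7064 MOD 97-10, the same scheme used for IBANs. Here is a minimal Python sketch; the 18-character prefix in the example is invented for illustration.

```python
def lei_check_digits(base18: str) -> str:
    """Compute the two ISO 7064 MOD 97-10 check digits for an
    18-character LEI prefix (letters map to 10-35, as for IBANs)."""
    digits = "".join(str(int(c, 36)) for c in (base18.upper() + "00"))
    return f"{98 - int(digits) % 97:02d}"

def lei_is_valid(lei: str) -> bool:
    """A valid LEI is 20 alphanumeric characters whose mapped
    decimal value leaves remainder 1 when divided by 97."""
    lei = lei.strip().upper()
    return (len(lei) == 20 and lei.isalnum()
            and int("".join(str(int(c, 36)) for c in lei)) % 97 == 1)

base = "969500KN90DZLPGV46"       # hypothetical 18-character prefix
lei = base + lei_check_digits(base)
print(lei, lei_is_valid(lei))     # True by construction
```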

The irony is that while LEIs are not yet universal, financial institutions are having to conduct more intensive and more frequent KYC, AML and CTF checks – something that would no doubt be a lot easier and a lot cheaper by reference to a standard counterparty identifier such as the LEI. Hopefully, an enterprising fintech startup is on the case.
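
As a sketch of what such a KYC shortcut might look like, GLEIF operates a free public API for retrieving LEI records. The endpoint and response fields below reflect my understanding of its JSON:API format and should be verified against the current documentation before use.

```python
import requests  # third-party: pip install requests

def lookup_lei(lei: str) -> dict:
    """Fetch an LEI record from GLEIF's public API (assumed endpoint;
    the response field paths are also assumptions to verify)."""
    url = f"https://api.gleif.org/api/v1/lei-records/{lei}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "lei": attrs["lei"],
        "legal_name": attrs["entity"]["legalName"]["name"],
        "status": attrs["registration"]["status"],
    }
```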

Next week: Sharing the love – tips from #startup founders

Who’s making money from market data?

In recent years, market data vendors and their clients have been fixated on supporting the demand for low-latency feeds to support high-frequency, algorithmic and dark pool trading while simultaneously responding to the post-GFC regulatory environment. New regulations continue to place increased operating burdens and costs on market participants, with a current focus on know your customer (KYC), pre-trade analytics and benchmark transparency.

For banks and asset managers, the cost of managing data is now seen as just as big an issue as the cost of acquiring the data itself. Furthermore, the need to meet regulatory obligations at every stage of every client transaction is adding to operating expenses – costs which cannot easily be recovered, thereby diminishing previously healthy transactional margins.

I was in Hong Kong recently, and had the opportunity to attend the Asia Pacific Financial Information Conference, courtesy of FISD. This annual event, the largest of its kind in the region, brings together stock exchanges, data vendors and financial institutions. It has been a few years since I last attended, so it was encouraging to see that delegate numbers have continued to grow, although of the many stock exchanges in the region only a few had taken exhibition stands, and representation from buy-side institutions and asset managers was still comparatively low. However, many major sell-side institutions and plenty of vendors were in attendance, along with a growing number of service providers across data networking, hosting and management.

Speaking to delegates, it was clear that there is a risk of regulation overload: not just the volume, but also the complexity and cost of compliance. It also felt that, despite frequent industry consultation, there is limited co-ordination between the various market regulators, resulting in overlap between jurisdictions and duplication across different regulatory functions. Are any of these regulations having the desired effect, or are they simply creating unforeseen outcomes?

One major post-GFC development has been the establishment of a common legal entity identifier (LEI) for issuers of securities and their counterparts. (This was in direct response to the Lehman collapse, where market participants failed, or were unable, to correctly and accurately identify counterparty risk in their trading portfolios, especially for derivatives such as credit default swaps.) However, despite a coordinated international effort, a published standard for the common identifier, and a network of approved LEI issuers, progress in assigning LEIs has been slow (especially in Asia Pacific), and coverage does not reflect market depth. For example, one data manager estimated that of the 20,000 reportable entities that his bank deals with, only 5,000 had so far been assigned LEIs.

Financial institutions need to consume ever more market data, for more complex purposes, and at multiple stages of the securities trading life-cycle:

  • pre-trade analysis (especially to meet KYC obligations);
  • trade transaction (often using best execution forums);
  • post-trade confirmation, settlement and payment;
  • portfolio reconciliation;
  • asset valuation (in the absence of mark-to-market pricing, this means evaluated pricing, often requiring more than one independent source – a simple example follows this list);
  • processing corporate actions (in a consistent and timely fashion, and taking account of different taxation rules);
  • financial reporting and accounting standards (local and global); and
  • a requirement to provide more transparency around benchmarks (and other underlying data used in the creation and administration of market indices, and in constructing investable products).
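
On the evaluated pricing point above, here is a simple example of combining several independent contributor quotes into a single price using a median, which resists a single outlier. The quotes are invented, and commercial evaluated-pricing services use far more sophisticated models.

```python
import statistics

# Hypothetical quotes for an illiquid bond from independent sources
quotes = {"dealer_a": 98.75, "dealer_b": 99.10, "broker_c": 98.90}

# With no traded (mark-to-market) price, take a consensus of the
# independent contributions; the median resists a single outlier.
evaluated_price = statistics.median(quotes.values())
print(f"evaluated price: {evaluated_price}")  # 98.90
```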

Yet with lower trading volumes and increased compliance costs, operating margins are inevitably being squeezed. This is probably hitting data vendors hardest, since data is increasingly seen as a commodity, and the cost of acquiring new data sets has to be offset against both the onboarding and switching costs and the costs of moving data around to multiple users, applications and locations.

The overloaded data managers from the major financial institutions said they wished stock exchanges and vendors would adopt more common industry standards for data licensing and pricing. That seems reasonable, until you hear the same data managers claim they each have their own particular requirements, and therefore a “one size fits all” approach won’t work for them. Besides, whereas in the past data was sold either on an enterprise-wide basis or on a per-user basis, data usage is now divided between:

  • human users and machine consumption;
  • full access versus non-display only;
  • internal and external use;
  • “as is” compared to derived applications; and
  • pre-trade and post-trade execution.

Oh, and then there’s the ongoing separation of real-time, intraday, end-of-day and static data.

This all raises the obvious question: if more data consumption does not necessarily mean better margins for data vendors (despite the need to use the same data for multiple purposes), who is making money from market data?

While the stock exchanges are the primary source of market data for listed equities and exchange-traded securities, pricing data for OTC securities and derivatives has to be sourced from dealers, inter-bank brokers, contributing traders and order confirmation platforms. The major data vendors have done a good job over the years of collecting, aggregating and distributing this data – but now, with a combination of cost pressures and advances in technology, new providers are offering to help clients to manage the sourcing, processing, transmission and delivery of data. One conference delegate commented that the next development will be in microbilling (i.e., pricing based on actual consumption of each data item by individual users for specific purposes) and suggested this was an opportunity for a disruptive newcomer.
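
To make the microbilling idea concrete, here is a toy sketch that meters each data-item access per user and per purpose, then prices consumption against invented rates.

```python
from collections import Counter

# Hypothetical per-item rates by purpose of use
RATES = {"display": 0.002, "non_display": 0.01, "derived": 0.05}

usage = Counter()

def record_access(user: str, item: str, purpose: str):
    """Meter a single data-item access for later billing."""
    usage[(user, purpose)] += 1

# Simulated consumption
record_access("trader_1", "AAPL.last", "display")
record_access("trader_1", "AAPL.last", "display")
record_access("algo_7", "AAPL.book", "non_display")

invoice = {
    (user, purpose): count * RATES[purpose]
    for (user, purpose), count in usage.items()
}
print(invoice)  # e.g. {('trader_1', 'display'): 0.004, ...}
```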

Finally, other emerging developments included the use of social media in market sentiment analysis (e.g., for algo-based trading), data visualisation, and the deployment of dedicated apps to manage “big data” analytics.

Next week: Australia 3.0