Unlocking the Promise of Open Banking

What do Noise-Cancelling Headsets and Open Banking Have in Common?

Photo: Jason Pofahl, via Unsplash

In October 2015, the European Union adopted the Revised Payment Services Directive, or PSD2, ushering in one of the more ambitious, cross-border, regulation-driven introductions of Open Banking so far.

Five years on, as it adapts to the PSD2-mandated Open Banking framework, the European financial sector has generated mostly heat, plenty of smoke, but only a few precious embers of light.

What to do?

Roots in Open Innovation

Professor Henry Chesbrough of the Haas School of Business at Berkeley coined the term Open Innovation in a 2003 Harvard Business Review article, advocating a mindset toward innovation that embraces external cooperation to solve the problems of an increasingly complex world.

A couple of Open Innovation’s more successful implementations are Apple’s App Stores, arguably cornerstones on which the iPhone and iPad’s dominance in their respective categories rest. Another manifestation of Open Innovation is the ubiquity of public or Open API architectures, whereby, just as with the App Stores, external parties can build services and innovate on top of an established infrastructure or functionality.

Types of APIs

Although the text of PSD2 itself is mostly preoccupied with the implementation of safety standards, such as Strong Customer Authentication, and other Partner API-related concerns (see graph), these are the preconditions for the good stuff in an Open Innovation-based approach: the requirement to give Third-Party Providers access to a customer’s transaction or payment accounts via Open APIs.

PSD2, and the Promises of Open Banking

PSD2’s version of Open Banking promises enhanced user experiences, security, data sharing, and analytics, thanks to a thriving ecosystem innovating on top of a standardized, transaction account infrastructure provided by the incumbents.

An incumbent’s first instinct, though, isn’t to share or standardize, unless it means others adopting its own standards. Standardization, like the removal of customer friction, would dismantle the barriers to entry that prop up the incumbents’ margins.

Instead, and despite paying lip service to the contrary, they are much more likely to strengthen their barriers to entry, or moats, by keeping their proprietary data formats and sticking to the minimum of external standardization and openness mandated by law.

In the current regulatory environment in which the details of technical standards still aren’t mandated by law, a layer of aggregators has sprung into existence to provide such standardized, single point(s) of access. In the long term, the problems arising from the lack of standardization may be solved by this aggregation layer, but to date, and without further regulatory intervention, it looks like several waves of disruption or consolidation are still in store.

Thus, at present, a main blocker for the fulfillment of the promises of PSD2 is the fragmentation of the ecosystem resulting from an imperfect and incomplete standardization and sharing of data and functionality.

Alternative Approaches to Dealing with Imperfect Standardization

This incomplete sharing and standardization of data makes Open Banking implementations, at least on the incumbent side, where the volumes currently are, compliance-driven rather than driven to fulfill the full potential of Open Banking. It also makes any use case that goes beyond the bare minimum required by law a highly complex undertaking.

One way organizations in the financial sector are contemplating dealing with this problem is to enrich their own transactional records, and the data available via Open Banking, by integrating with other data sources, such as location, retail channel, social media, and myriad others.

These integrations are probably what a big-data world looks like in the long term. But in the short term, it’s an approach that deals with complexity by adding even more complexity, never mind the technical, organizational, legal, and privacy challenges it will likely intensify.

If this sounds like the quandary your organization finds itself in, we would like to suggest an alternative approach: one that yields returns faster, sets you on the right trajectory for organizational learning, and lets you start harvesting the low-hanging fruit by leveraging data that is already available and relatively reliable.

Starting with The Best Predictor of Future Behavior

Some of the more reliable data that PSD2 has made available is transactional or payment account data.

If, as economists know, a single commodity’s price and its fluctuations communicate so much information that they can shape entire industries and clear markets, then it stands to reason that there is a whole lot of information to unpack from payment transaction data alone.

In the aggregate, past behavior is the best predictor of future behavior, so single-variable statistics arranged in a time series pack much more information than meets the eye. This is why autocorrelation, that is, using past behavior to predict future behavior, works.
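The idea can be sketched in a few lines of Python (a minimal sketch with synthetic data; the series, lag range, and function names are illustrative, not our production algorithm): the lag at which a series correlates most strongly with itself reveals the length of its recurring cycle.

```python
# Sketch: finding a recurring period in a daily cash-flow series via
# autocorrelation. The data below is synthetic and purely illustrative.
import numpy as np

def autocorrelation(series, lag):
    """Correlation between the centered series and itself shifted by `lag`."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    denom = (s * s).sum()
    if denom == 0:
        return 0.0
    return float((s[:-lag] * s[lag:]).sum() / denom)

# A synthetic account: small random daily noise plus a 30-day outflow,
# e.g. a monthly mortgage payment.
rng = np.random.default_rng(42)
days = 360
series = rng.normal(0, 5, days)
series[::30] -= 1200.0

# The lag with the strongest autocorrelation exposes the cycle length.
best_lag = max(range(2, 120), key=lambda k: autocorrelation(series, k))
print(best_lag)  # → 30
```

On real account data, the dominant lags typically surface weekly, monthly, or quarterly rhythms rather than a single clean spike.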

Several members of our team worked with Bluetooth headsets before joining the financial industry, and we noticed that credit and debit card transaction data behaves very similarly to sound waves. So we drew inspiration from how we had programmed noise-cancelling headsets, which apply autocorrelation algorithms to predict recurring unwanted noise and emit the opposite sound wave to cancel it.

Left: Current Account Balances, by the Author. Right: Noise Sound Waves. Image by Pete Linforth from Pixabay.

Recurring financial events behave remarkably similarly to sound waves. The sound waves a headset emits to reproduce music or speech have an amplitude and a frequency, and are subject to a level of noise. Recurring financial events also have a frequency, and an amplitude in terms of monetary value. Some of these events can give us insight into a borrower’s financial health or expenditure patterns, while others are merely noise.

Just as noise-cancelling headsets are programmed with algorithms that predict unwanted recurring noise in order to cancel it, so too can our algorithm predict and classify recurring events in a transaction account.
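As a toy illustration of the headset side of the analogy (the frequency and sample count are arbitrary assumptions), cancellation works because the predicted noise and its inverted waveform sum to silence:

```python
# Toy anti-phase cancellation: a predicted 50 Hz noise wave, sampled at
# 1 kHz, summed with its inverse leaves zero residual signal.
import math

noise = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
anti = [-x for x in noise]          # the "opposite" sound wave
residual = max(abs(n + a) for n, a in zip(noise, anti))
print(residual)  # → 0.0
```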

As our dataset grows, we can predict with increasing confidence whether a transaction is, among other things, a recurring monthly salary, a quarterly dividend, a monthly mortgage payment, a weekly grocery run, a yearly vacation, a regular lunch near the office, or an emergency expense, just by looking at the amount of the transaction and the point in time when it took place.
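A minimal sketch of this kind of classification (the `is_recurring` helper, its thresholds, and the sample data are illustrative assumptions, not our actual algorithm): a group of transactions looks recurring when the amounts cluster tightly and the gaps between dates are stable.

```python
# Sketch: flag a group of transactions as recurring using only amount
# and date. Thresholds here are arbitrary illustrative choices.
from datetime import date
from statistics import mean, pstdev

def is_recurring(txns, amount_tol=0.05, interval_jitter_days=3):
    """txns: list of (date, amount) tuples. True when amounts stay within
    `amount_tol` of their mean and the day-gaps between events are stable."""
    if len(txns) < 3:
        return False
    txns = sorted(txns)
    amounts = [abs(a) for _, a in txns]
    m = mean(amounts)
    if any(abs(a - m) > amount_tol * m for a in amounts):
        return False
    gaps = [(d2 - d1).days for (d1, _), (d2, _) in zip(txns, txns[1:])]
    return pstdev(gaps) <= interval_jitter_days

mortgage = [(date(2020, m, 1), -1200.0) for m in range(1, 7)]
groceries = [(date(2020, 1, d), -a) for d, a in [(3, 80), (9, 45), (14, 120)]]
print(is_recurring(mortgage), is_recurring(groceries))  # → True False
```

In practice the same test, run per candidate cycle length, separates mortgages and salaries from irregular spending.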

For the base algorithm, we chose to look only at transaction amount and time, because they are the most robust and reliable pieces of data available.

The description field of a transaction is usually less reliable: the same recurring transaction can carry a different description at different times, depending on circumstances not recorded in the PSD2 data. So we use a separate algorithm that estimates, via distance analysis, the probability that one transaction’s description is similar to another’s, and feed that score into the first algorithm to enhance its predictive value.
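A simple stand-in for such distance analysis, using Python’s standard-library `difflib` as the similarity measure (an assumption for illustration, not our actual distance metric):

```python
# Sketch: score how similar two transaction descriptions are, so that
# minor variations of the same recurring payment can be matched.
from difflib import SequenceMatcher

def description_similarity(a, b):
    """Return a 0..1 similarity score between two descriptions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

same = description_similarity("ACME MORTGAGE 2021-01", "Acme Mortgage 2021-02")
diff = description_similarity("ACME MORTGAGE 2021-01", "COFFEE SHOP 14TH ST")
print(round(same, 2), round(diff, 2))
```

Two months of the same mortgage score close to 1.0, while an unrelated merchant scores far lower, which is exactly the signal needed to reinforce the amount-and-time prediction.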

And, most importantly for banks, our algorithms also allow us to predict whether a borrower may, for example, miss a mortgage or other loan payment.

Notice how we can glean all this useful information, and more, using only transactional data that is already available, either within a financial institution’s internal systems or through a PSD2 API.

Start Harvesting the Benefits of Open Banking Now

Sure, you can further enrich this data with retailer information, social media, location, etc. And yes, it will probably improve the predictive value of your model.

However, if you are in the same boat as most of the organizations we have been talking to, the approach we shared is much easier to start implementing and has a quicker time to value. It should also give you plenty of runway before the technical, organizational, and privacy work needed to integrate other sources of rich data can be completed to the satisfaction of the many compliance and privacy concerns such integrations raise.

Article first published on Medium.
