Xplore: Data Classification

In this second issue of our series of articles about Xplore, we look at Data Classification. As you know by now, Xplore is our Data Discovery and Metadata Management platform. In our first article, we shed light on Data Discovery in Xplore and showed how it helps uncover a firm's different data assets and make existing datasets available to as many data consumers as possible.

What is data classification?

Generally speaking, data classification is the process of organising, tagging and inventorying data assets to make them easier to search and navigate. As well as providing a relevant and meaningful data catalogue to browse and search, data classification helps eliminate redundancies and reconcile inconsistencies, both of which threaten to compromise data quality.

How does Xplore facilitate data classification?

Xplore enables firms to build a data classification tailored to their business context, providing meaningful organisation and navigation of their datasets. A flexible, user-friendly tool allows data stewards to construct an inventory of their datasets: you can create a hierarchical tree structure with the levels and sub-levels you require, and then add your datasets at the relevant points.

[Screenshot: Xplore data classification]
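To make the idea more concrete, here is a minimal sketch of such a hierarchy in Python. It is purely illustrative (the class, level and dataset names are our own and not part of Xplore's interface), but it shows levels, sub-levels and datasets attached at the relevant points.

```python
# Purely illustrative sketch of a hierarchical classification (not Xplore's API).
# Each node can hold sub-levels and the datasets attached at that point in the tree.

class ClassificationNode:
    def __init__(self, name):
        self.name = name
        self.children = {}   # sub-levels, keyed by name
        self.datasets = []   # datasets attached at this level

    def add_level(self, name):
        return self.children.setdefault(name, ClassificationNode(name))

    def add_dataset(self, dataset):
        self.datasets.append(dataset)

    def walk(self, path=""):
        """Yield (path, dataset) pairs, e.g. for browsing or search."""
        here = f"{path}/{self.name}"
        for dataset in self.datasets:
            yield here, dataset
        for child in self.children.values():
            yield from child.walk(here)

# Build a small tree: Reference Data > Equities, and attach a dataset to it.
root = ClassificationNode("Reference Data")
equities = root.add_level("Equities")
equities.add_dataset("EMEA listed securities")

for path, dataset in root.walk():
    print(path, "->", dataset)   # /Reference Data/Equities -> EMEA listed securities
```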

Xplore also promotes the use of tagging to improve categorisation and make searches more effective. The benefits of tagging data cannot be overstated, and there are numerous practical use cases in which data tagging is instrumental, such as identifying sensitive personal data and improving the searchability of unstructured data.

[Screenshot: Xplore data tagging]
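As a simple illustration of the principle (again, a toy example of our own rather than Xplore's interface), tagging amounts to attaching labels to datasets and then filtering on them:

```python
# Toy example of tag-based filtering (not Xplore's interface): tag datasets once,
# then retrieve everything carrying a given tag.
dataset_tags = {
    "client_master":      {"pii", "reference-data"},
    "trade_blotter_2023": {"transactions"},
    "support_emails_raw": {"pii", "unstructured"},
}

def find_by_tag(tag):
    return sorted(name for name, tags in dataset_tags.items() if tag in tags)

print(find_by_tag("pii"))   # ['client_master', 'support_emails_raw']
```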

Stay tuned to discover more about the other functionalities and features of Xplore in our upcoming articles. The next issue in this series will focus on metadata management. Until then, if you would like to enquire about Xplore or request a product demo, please contact us. We would love to connect with you!

Xmon: Multi-hit cost

How to: Drill down and mitigate multi-hit cost

What are multi-hit costs?

Multi-hit cost is a feature of certain reference data pricing models and is incurred when your organisation requests the same billable category multiple times on the same day for the same securities. Multi-hit costs can creep up and become a significant part of your monthly reference data invoice: in 2022 we observed that, on average, around 18% of reference data spend was attributed to multi-hit costs! These costs are often brushed aside in organisations due to a lack of cost transparency and a limited understanding of data pricing models, but Xmon is designed to help you overcome this inefficiency and ensure you pay only for what you need.
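To illustrate what this means in practice, here is a toy Python sketch of the underlying logic, the kind of analysis Xmon automates for you. The request log, its columns and the unit costs are hypothetical, and how repeat hits are actually charged depends on your vendor's pricing model.

```python
import pandas as pd

# Hypothetical request log: columns, securities and unit costs are illustrative only.
requests = pd.DataFrame({
    "request_date":      ["2023-03-01", "2023-03-01", "2023-03-01", "2023-03-02"],
    "security_id":       ["US0378331005"] * 4,
    "billable_category": ["Security Definition"] * 4,
    "unit_cost":         [0.05, 0.05, 0.05, 0.05],
})

# A hit becomes a "multi-hit" when the same billable category is requested more than
# once for the same security on the same day; every hit after the first is avoidable.
grouped = requests.groupby(["request_date", "security_id", "billable_category"])
avoidable = grouped["unit_cost"].apply(lambda costs: costs.iloc[1:].sum())
print(f"Avoidable multi-hit cost: {avoidable.sum():.2f}")   # 0.10 in this toy example
```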

What can you do to understand and mitigate reference data multi-hit costs with Xmon?

Firstly, it is important to quantify how much of your monthly reference data invoice is attributed to multi-hit cost and to understand the scale of the potential optimisations to be realised. Using Xmon's offline analysis, you can view the total multi-hit cost relative to the total invoice amount and track its monthly evolution. These analytics can be reviewed in the ‘Cost & Usage Explorer’ BI reporting tool under the ‘Invoice time series’ dashboard.

If you are using Xmon active integration and tracking every single request flowing out of your organisation in real time, multi-hit cost breakdowns can be found in the ‘Heat Map’ dashboard of the ‘Cost & Usage Explorer’ report menu.

The next step is to break down the multi-hit cost by billable category and focus on static data categories for which the data does not change intraday, such as security definition and corporate action. For these categories, there is no reason to request the same data multiple times on the same day, as the returned values will not have changed.
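As a rough sketch of that breakdown, with dummy figures and an illustrative (not definitive) list of static categories, the idea is simply to isolate the multi-hit cost sitting in categories that never change intraday:

```python
import pandas as pd

# Hypothetical multi-hit cost per billable category (dummy figures for illustration).
multi_hit_by_category = pd.Series({
    "Security Definition": 1840.0,   # static: does not change intraday
    "Corporate Action":     620.0,   # static: does not change intraday
    "Pricing":              310.0,   # legitimately requested intraday
})

STATIC_CATEGORIES = {"Security Definition", "Corporate Action"}   # illustrative list
avoidable = multi_hit_by_category[multi_hit_by_category.index.isin(STATIC_CATEGORIES)]
print(avoidable.sort_values(ascending=False))
print(f"Potential saving: {avoidable.sum():.2f}")
```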

Once the high-level analysis has been performed, you can drill down to the request level to shed light on the origin of the multi-hit cost and derive clear recommendations for system owners to mitigate this extra expense. Using Xmon's offline analysis, this is easily done by downloading standard report 31 – Detailed itemised report by Filename. This report gives you the granularity to filter on a particular category or security type and to count how many securities contribute to the multi-hit cost for each request. Grouping by request date helps you identify the day on which multi-hit cost changed or spiked, and filtering on a specific day and grouping by filename gives you the top 5 jobs contributing to your multi-hit cost.
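For those who prefer to work on the exported file directly, the same drill-down can be sketched in pandas. The column names used below are assumptions for illustration and not the actual schema of report 31:

```python
import pandas as pd

# Assumed CSV export of the detailed itemised report, with hypothetical columns:
# request_date, filename, billable_category, security_type, security_id, multi_hit_cost.
report = pd.read_csv("detailed_itemised_report.csv")

# Focus on one static category.
static = report[report["billable_category"] == "Security Definition"]

# How many securities contribute to multi-hit cost in each request (filename)?
per_request = (static.groupby(["request_date", "filename"])
                     .agg(securities=("security_id", "nunique"),
                          multi_hit_cost=("multi_hit_cost", "sum")))

# When did multi-hit cost change or spike?
print(per_request.groupby(level="request_date")["multi_hit_cost"].sum())

# Top 5 jobs driving multi-hit cost on a given day (example date).
print(per_request.xs("2023-03-01", level="request_date")
                 .sort_values("multi_hit_cost", ascending=False)
                 .head(5))
```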

If Xmon is integrated in active or passive mode, you can go one step further and review field-level usage for each of the large requests identified above. This will help you spot fields that should be moved between jobs/requests, so that static categories are minimised or eliminated from intraday requests and grouped into a single daily request.
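As a final sketch, spotting those candidate fields could look like the snippet below. The extract, its columns and the list of static categories are again hypothetical rather than an actual Xmon export format:

```python
import pandas as pd

# Assumed field-level usage extract with hypothetical columns:
# filename, billable_category, field, intraday (boolean).
usage = pd.read_csv("field_level_usage.csv")

STATIC_CATEGORIES = {"Security Definition", "Corporate Action"}   # illustrative list

# Fields of static categories still requested by intraday jobs are candidates to be
# moved out and grouped into a single daily request.
candidates = usage[usage["billable_category"].isin(STATIC_CATEGORIES) & usage["intraday"]]
print(candidates.groupby("filename")["field"].unique())
```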

Note that it might be possible to optimise further by splitting daily static category requests so that only the fields relevant to each asset class are requested. This will be the topic of a future article in the “How to” series, so stay tuned.

Speak to one of our experts today

If you are looking to optimise your market data spend even further, do not hesitate to get in touch!