Xmon Insights – Optimization of Pricing Data Acquisitions

In the first article in our series of Xmon Insights use cases, we look at how Xmon’s security-level feature helps you optimise pricing data acquisitions. To refresh your memory on the Xmon Insights service, follow this link to one of our previous articles on the topic.

With some data vendors, requesting pricing data at different times of the day triggers different snapshots, and each additional snapshot has a significant cost impact. Consequently, if a price is needed only for end-of-day P&L or books-and-records generation, a single snapshot per security should be downloaded each day, as multiple price updates throughout the day are unnecessary.

To review pricing snapshot usage and optimise cost against the vendor’s rate card, the organisation needs a report providing security-level details. Xmon addresses this need with a report that can be generated and downloaded on demand. The report lists each security ID together with all the pricing snapshots used within the organisation, so users can review with the business whether some snapshots can be turned off for those securities. Once security IDs with redundant snapshots have been identified, Xmon provides the list of requests containing those securities so they can be amended and optimised.
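As a minimal sketch of the identification step (the security IDs and timestamps below are hypothetical, and this is not Xmon’s actual report logic), redundant intraday snapshots can be flagged by grouping pricing requests per security per day:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical pricing-request log: (security_id, snapshot timestamp)
requests = [
    ("XS1234567890", "2023-03-01 10:00"),
    ("XS1234567890", "2023-03-01 17:30"),
    ("US0378331005", "2023-03-01 17:30"),
]

def redundant_snapshots(requests):
    """Return (security, day) pairs that triggered more than one snapshot."""
    per_day = defaultdict(set)
    for sec, ts in requests:
        day = datetime.strptime(ts, "%Y-%m-%d %H:%M").date()
        per_day[(sec, day)].add(ts)
    # Keep only securities with multiple distinct snapshots on the same day
    return {key: sorted(times) for key, times in per_day.items() if len(times) > 1}
```

Requests containing the flagged securities then become candidates for consolidation into a single end-of-day snapshot.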

With Xmon Insights’ guidance, many of our clients, including asset managers and hedge funds, have achieved savings of 10-15% on their total reference data spend. If the above sounds of interest to you, then get in touch; we would love to hear from you!


Xmon: Managing Reference Data Flowing Within The Organisation

Managing reference data with confidence has become increasingly important in today’s data-driven business environment. Effective reference data management ensures that businesses can make informed decisions, comply with regulatory requirements, and manage risk. It is important to track reference data coming from external sources, i.e. data vendors, into the organisation, as well as data moving between systems within the organisation. Tracking reference data inside an organisation can be a daunting task, especially when it involves multiple systems, data source formats and mappings. This is where Xmon can help!

How Xmon Can Help

1. Customizable Mappings for Securities and Data Attributes

One of the key features of Xmon is its customizable mappings for securities and data attributes. This feature enables businesses to map their own internal security references and data attributes, ensuring that their data is accurately represented, classified and organised. It also allows reference data points to be tracked within the organisation, even if they are modified or mapped on their journey from source to consumer.

2. Customizable Cost Models

Xmon also includes customizable cost models. This feature enables businesses that wish to do so to track the costs associated with their reference data as it flows within the organisation, allowing them to monetize and allocate spend internally in a fair and transparent way.
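One simple cost model is pro-rata allocation of a vendor invoice by actual usage, which could be sketched as follows (the consumer names and figures are purely illustrative):

```python
def allocate_cost(total_cost, usage_by_consumer):
    """Split a monthly vendor invoice across internal consumers
    in proportion to their actual usage."""
    total_usage = sum(usage_by_consumer.values())
    return {consumer: round(total_cost * usage / total_usage, 2)
            for consumer, usage in usage_by_consumer.items()}

# Hypothetical monthly hit counts per downstream system
usage = {"risk": 12_000, "pnl": 6_000, "reporting": 2_000}
print(allocate_cost(10_000.0, usage))
# {'risk': 6000.0, 'pnl': 3000.0, 'reporting': 1000.0}
```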

3. Customizable Data Formats

Whether data is distributed on queues, in files or in other formats such as JSON or XML, Xmon provides integration adapters that allow the ingestion of data requests in their native formats for tracking within the organisation.

The Importance of Tracking Reference Data Within The Organisation

1. Transparency of Usage and Accountability

Tracking reference data inside an organisation is important for several reasons. Firstly, it promotes transparency of usage and accountability. By tracking reference data, businesses gain a better understanding of who is using which type of data in the organisation, enabling transparency of spend and reporting. This promotes accountability and ensures that reference data is used effectively and in the most cost-efficient way possible.

2. Data Lineage

Secondly, tracking reference data enables data lineage. Data lineage is important because it enables businesses to trace the origin and transformation of their reference data. With data lineage, businesses can ensure that their reference data is accurate and reliable, and that it meets the specific needs of their business.

3. Internal Cost Allocation

Finally, tracking reference data enables internal cost allocation. By tracking reference data, businesses can ensure that their data management costs are allocated appropriately across their organisation. This ensures that data management costs are optimised, and that reference data is managed in a cost-effective manner.

Client Use Case: A Large Asset Management Firm

A global asset management firm had been using Xmon to track data requests from their central Enterprise Data Management (EDM) system to external vendors for several months, reducing costs and improving transparency. Following the successful deployment on the external side, the client then moved on to leverage Xmon internally to track reference data between the EDM system and ultimate internal consumers.

By using Xmon’s flexible data mapping and real-time tracking functionality, the client optimised spend by identifying data points that were acquired but never used, removing them from vendor calls, and began allocating costs fairly to internal data consumers based on actual usage. This resulted in improved transparency, better usage reporting for compliance and audit purposes, and a further cost reduction of 8% on overall data spend.

Xmon empowered this client to track reference data effectively both internally and externally, leading to decreased costs, enhanced transparency, and fairer cost allocation.

Conclusion: Managing Internal Reference Data flows with Xmon

Managing reference data with Xmon enables businesses to track their reference data with ease, ensuring that it is accurate, consistent and reliable. With its customizable mappings for securities and data attributes and its customizable cost models, Xmon promotes transparency of usage and accountability, enables data lineage, and facilitates internal cost allocation. Ultimately, Xmon helps businesses improve their operational efficiency, enables transparency and accountability in the organisation, and allows stakeholders to make better decisions based on reliable and consistent data.

If you’d like to know more about how Xmon can help you track reference data inside your organisation, don’t hesitate to reach out by clicking here.

Xplore: Searching for Datasets

In this fourth issue of our series of articles about Xplore, our Data Discovery and Metadata Management platform, we zoom in on searching for datasets. In our first article, we looked at Data Discovery in Xplore and its instrumental role in uncovering the different data assets in the firm and making existing datasets available to as many data consumers as possible. In the second article, we shed light on Data Classification in Xplore and explained how fundamental it is in organising the firm’s data, making it more efficient to search and navigate and reducing redundancies and inconsistencies which would otherwise compromise data quality. And in the third article, we went on to discuss Metadata Management in Xplore and how it makes datasets more discoverable and allows users to assess if the dataset is relevant for their needs before they access the data.

Searching for your data does not have to be like looking for a needle in a haystack.

Employees spend valuable time looking for the data they need. When this data is scattered across different departments and segregated in silos, searching for it is very inefficient and sometimes ineffective. It is almost like looking for a needle in a haystack! With Xplore – through data discovery, data classification and metadata management – all your datasets (actually the metadata describing them rather than their underlying data) are organised in a central inventory. Searching this data catalogue is guaranteed to help you locate that dataset you are after!

How does Xplore facilitate searching for datasets?

When data consumers log in to Xplore, they are presented with a customisable homepage called MyDataSpace. The most prominent feature of this page is the search box, where users can search for datasets. They may type one or more keywords, which is particularly useful when users are searching for specific dataset names or column names. This lexical search is enhanced by its ability to cater for misspellings and to support matching wildcard patterns and regular expressions. Xplore’s search engine also supports full-text searches which allow users to find matches within dataset descriptions, notes and other free-text metadata fields.

Consider a specific example to illustrate the effectiveness of Xplore’s search functionality. In financial services, reference data is critical for financial operations and a common reference dataset is the security master, which contains information on securities such as bonds, equities and derivatives. With Xplore’s data search functionality, a user can simply type “country of risk” into the search box and locate the dataset containing the reference data for this field.
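The search behaviour described above can be approximated in a few lines with Python’s standard library – a toy sketch only, with a hypothetical catalogue and matching thresholds; Xplore’s actual engine is far more sophisticated:

```python
import difflib
import fnmatch
import re

# Hypothetical catalogue: dataset name -> free-text description
catalogue = {
    "security_master": "Reference data for bonds, equities and derivatives",
    "country_of_risk": "Country of risk per issuer",
    "fx_rates_eod": "End-of-day FX rates",
}

def search(query, catalogue):
    """Toy lexical search: wildcard match on names, full-text match
    on descriptions, and fuzzy fallback for simple misspellings."""
    hits = set()
    for name, desc in catalogue.items():
        if fnmatch.fnmatch(name, query):                      # wildcards, e.g. "fx_*"
            hits.add(name)
        if re.search(re.escape(query), desc, re.IGNORECASE):  # full-text
            hits.add(name)
    # Fuzzy matching catches misspellings and spacing differences in names
    hits.update(difflib.get_close_matches(query, catalogue, cutoff=0.8))
    return sorted(hits)

print(search("fx_*", catalogue))             # → ['fx_rates_eod']
print(search("country of risk", catalogue))  # → ['country_of_risk']
```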

Searching for Data in Xplore

Searching for data is an interesting and constantly evolving area and we are super excited about continuously enhancing our search engine in Xplore. Stay tuned to read about our filters and facets for searching datasets as well as the other functionalities and features of Xplore in subsequent articles. In the meantime, if you would like to enquire about Xplore or request a product demo, please contact us. We would love to hear from you!

How To – Use Field Level Analysis To Optimise Category Cost


For information about Internal Reference Data tracking and optimising spend – click here

In the second article in our How To series, we focus on how you can use Xmon’s field-level analysis to optimise your costs further.

When downloading reference data from third-party vendors, you are charged for each billable category you request. A billable category is charged as soon as a single field from that category is requested. It is therefore crucial to know what is requested across all systems and environments in your organisation, at field level. Such granularity can be challenging and time-consuming to report month after month, unless you are equipped with a “smart meter” for reference data, such as Xmon.

Xmon analyses the traffic of reference data downloads made across all systems and environments of your organisation. Our advanced reporting engine consolidates all requests for a given calendar month, provides a distinct list of fields requested by category, and produces a cost estimate for each request as well as aggregated cost indicators.
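The core of that consolidation can be sketched as follows – the request log, category names and rate card below are entirely hypothetical, not any vendor’s actual pricing:

```python
from collections import defaultdict

# Hypothetical monthly request log: (request_id, category, field)
log = [
    ("req1", "Credit Risk", "RTG_MOODY"),
    ("req1", "Credit Risk", "RTG_SP"),
    ("req1", "Derived Data", "DUR_ADJ_MID"),
    ("req2", "Credit Risk", "RTG_MOODY"),
]

# Hypothetical rate card: cost per category hit
rate_card = {"Credit Risk": 0.05, "Derived Data": 0.12}

def fields_by_category(log):
    """Distinct fields requested per billable category for the month."""
    out = defaultdict(set)
    for _, cat, field in log:
        out[cat].add(field)
    return {cat: sorted(fields) for cat, fields in out.items()}

def cost_per_request(log, rate_card):
    """A category is billed once per request as soon as any of its fields
    is present, so each request costs the sum over distinct categories hit."""
    cats = defaultdict(set)
    for req, cat, _ in log:
        cats[req].add(cat)
    return {req: round(sum(rate_card[c] for c in hit), 2)
            for req, hit in cats.items()}
```

Here `req1` is billed for two categories even though one of them is triggered by a single field – exactly the situation the field-level review aims to surface.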

Equipped with this analysis, Xmon empowers the data management team to identify expensive categories that are triggered by just a few fields, and to challenge the business to review whether retrieving this data is necessary. On multiple separate occasions, our clients reduced their overall reference data cost by 10-15% simply by turning off fields that were no longer used in downstream systems.

If the data is required by the business, the review is usually followed by validating the scope of those fields by asset type, as not all fields may be needed across all asset types. The “one-size-fits-all” approach is quick to implement but more expensive to run, resulting in spiralling, uncontrolled spend. In one client use case, we guided a large asset manager to split data acquisition requests by asset type and select only the fields relevant to each asset class, generating a cost saving of around 22%.

Once the above optimisations have been performed, we can focus on investigating alternative commercial models with the same vendor. Depending on the volume of data requested, one pricing model may be cheaper than another. Here, Xmon provides out-of-the-box simulations comparing pay-as-you-go pricing, band pricing, bulk data subscriptions and enterprise models. Finding the most suitable pricing model can generate savings of up to 33%, as was the case with one of our large insurance clients.
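The idea behind such a simulation can be sketched like this – all rates, bands and fees below are made-up placeholders, not any vendor’s actual rate card:

```python
def pay_as_you_go(hits, unit_cost=0.04):
    """Hypothetical flat per-hit pricing."""
    return hits * unit_cost

def band_pricing(hits, bands=((100_000, 0.04), (500_000, 0.03), (float("inf"), 0.02))):
    """Hypothetical tiered pricing: cheaper unit rate in higher bands."""
    cost, prev = 0.0, 0
    for limit, rate in bands:
        cost += max(0, min(hits, limit) - prev) * rate
        prev = limit
        if hits <= limit:
            break
    return cost

def bulk_subscription(hits, flat_fee=15_000.0):
    """Hypothetical flat-fee bulk feed, independent of volume."""
    return flat_fee

def cheapest(hits):
    """Compare the candidate models for a given monthly volume."""
    models = {"pay-as-you-go": pay_as_you_go(hits),
              "band": band_pricing(hits),
              "bulk": bulk_subscription(hits)}
    return min(models, key=models.get)
```

At low volumes per-hit pricing wins; as volumes grow, the banded and then the flat-fee models overtake it, which is why the break-even points are worth simulating.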

If you are looking for help understanding and optimising your reference data spend, get in touch!

Xplore: Metadata Management


In this third issue of a series of articles about Xplore, our Data Discovery and Metadata Management platform, we focus on metadata management. In our first article, we shed light on Data Discovery in Xplore, explaining how essential it is for finding the different data assets in the firm and exploiting existing datasets for use by as many data consumers as possible. In the second article, we looked at Data Classification in Xplore and its pivotal role in organising the firm’s data, making it more efficient to search and navigate and reducing redundancies and inconsistencies which would otherwise compromise data quality.

What is metadata and why is it important?

Simply put, metadata is data about data. It is information that describes various aspects of your data, but not its content. If you compare your dataset to a book, the metadata of your dataset would include the book title, the author’s name, the ISBN, the publication date, a description of the book and possibly some user reviews! Metadata, when managed properly, makes your dataset more discoverable and helps users assess if the dataset is relevant for their needs even before they access the data itself. Well-managed metadata also promotes data quality, integrity and reliability.

How does Xplore facilitate metadata management?

The journey of metadata in Xplore starts at data discovery, where crawlers discover the DataSets on a DataStore and pull their metadata into Xplore. The metadata captured by crawlers is the technical metadata: the name of the dataset, its type, its location and the schema of the data (e.g. the column names and types in the case of a .csv file). In addition to the technical metadata, data managers and, in some cases, data consumers can add a layer of user-defined metadata to describe the data further. This could comprise a user-friendly name for the dataset, a description of the data, additional notes or even user reviews. This supplementary layer captures the human knowledge that would otherwise remain undocumented. Tags can also be attached to the dataset to make searching and filtering more efficient.

Remember: the richer the metadata, the better described the dataset, the more likely it is to be picked up in searches, and the more useful it is to users browsing the data. By consulting the metadata (be it the schema of the dataset or some notes left by a user), data consumers can evaluate the usefulness of datasets and find what they need more quickly and efficiently.
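For illustration, the technical-metadata capture for a .csv file could look something like this – a toy sketch with naive type inference, not Xplore’s actual crawler:

```python
import csv
import io

def crawl_csv_schema(name, fh):
    """Toy crawler step: capture technical metadata (dataset name, type,
    column names and inferred column types) from a .csv file handle."""
    reader = csv.reader(fh)
    header = next(reader)
    first_row = next(reader, [])

    def infer(value):
        # Naive inference from the first data row only
        for cast, label in ((int, "integer"), (float, "float")):
            try:
                cast(value)
                return label
            except ValueError:
                pass
        return "string"

    return {
        "dataset": name,
        "type": "csv",
        "columns": [{"name": col, "type": infer(val)}
                    for col, val in zip(header, first_row)],
    }

sample = io.StringIO("isin,price,currency\nXS1234567890,101.25,EUR\n")
print(crawl_csv_schema("security_master.csv", sample))
```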


Different Types of Metadata in Xplore


Because user-defined metadata can vary with the context of the data and the needs of a firm, Xplore allows data managers to enrich the metadata model with custom metadata fields tailored for their specific use cases. For instance, you could add a Security Level field to reflect the level of confidentiality or security assigned to a dataset or an Intellectual Property Constraint field that dictates the extent to which a dataset can be used in a compliant way. Custom metadata fields can be configured as free-text fields, drop-down lists, date fields or checkboxes.


Configuring a Custom Metadata Dropdown Field in Xplore


Stay tuned to find out more about the other functionalities and features of Xplore in subsequent articles. In the meantime, if you would like to enquire about Xplore or request a product demo, please contact us. We would love to hear from you!


Introducing Xmon’s new Cost Optimisation feature!

We are excited to announce that our new, long-awaited Cost Optimisation feature page is now live.

Delivering further functionality and quality enhancements in one of Xmon’s core areas – cost optimisation – Xmon now instantly analyses your usage and spend to provide automated optimisation recommendations. The new feature scans your consumption and produces an automatic, intelligent summary of actionable insights, along with their expected savings.

An automatic summary of all recommendations is displayed for the current month, highlighting total potential savings. These are then broken down and summarised by section in a clear, easy-to-understand manner, starting with the biggest potential savings first.

To access this feature, simply navigate to the REPORTS menu and select COST OPTIMISATION:

Amongst other recommendations, the Cost per Field section will help you identify categories triggered by just a few fields, adding great value to optimisation discussions in your enterprise. Additionally, clicking on the DATA USAGE SUMMARY hyperlink takes you to our existing page with additional detailed analysis at field and request level, helping to drive your costs down further.

Multi-hit cost optimisation, the topic of our last article, shows you potential savings for static categories and inefficient data requests. In addition, we have added automated percentage allocation of non-production-related data request costs.

At Xpansion, we understand the importance of cost optimisation for your business. Our Cost Optimisation feature provides a comprehensive and easy-to-use solution that can help you identify potential saving opportunities and take measures to manage your costs more effectively and proactively.

Try out the functionality for yourself, or contact us today to learn more about our Cost Optimisation feature and how it can help your business save money and increase efficiency!

Xplore: Data Classification

In this second issue of our series of articles about Xplore, we look at Data Classification. As you know by now, Xplore is our Data Discovery and Metadata Management platform. In our first article, we shed light on Data Discovery in Xplore and showed how it helps uncover the different data assets in the firm and leverage existing datasets for use by as many data consumers as possible.

What is data classification?

Generally speaking, data classification is the process of organising, tagging and inventorying data assets to make searching for, and navigating through, them more efficient. As well as providing a relevant and meaningful data catalogue to browse and search, data classification helps eliminate redundancies and reconcile inconsistencies, both of which threaten to compromise the quality of data.

How does Xplore facilitate data classification?

Xplore enables firms to build a Data Classification tailored to their business context, providing meaningful organisation and navigation of their datasets.  A flexible, user-friendly tool allows data stewards to construct an inventory for their datasets. You can create a hierarchical tree structure with the levels and sub-levels you require, and you can then add your DataSets at the relevant point.
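Conceptually, such an inventory is a simple tree. The sketch below (with made-up level names) shows the idea of levels, sub-levels and datasets attached at any node:

```python
class ClassificationNode:
    """Minimal sketch of a hierarchical classification tree in which
    datasets can be attached at any level or sub-level."""

    def __init__(self, name):
        self.name = name
        self.children = {}   # sub-level name -> ClassificationNode
        self.datasets = []   # datasets attached at this node

    def add_level(self, path):
        """Create (or reuse) the chain of sub-levels along `path`."""
        node = self
        for part in path:
            node = node.children.setdefault(part, ClassificationNode(part))
        return node

    def attach(self, path, dataset):
        self.add_level(path).datasets.append(dataset)

    def all_datasets(self):
        """Collect every dataset in this subtree."""
        found = list(self.datasets)
        for child in self.children.values():
            found += child.all_datasets()
        return found

root = ClassificationNode("root")
root.attach(["Reference Data", "Securities"], "security_master")
root.attach(["Market Data", "FX"], "fx_rates_eod")
```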

Xplore Data Classification

Xplore also promotes the use of tagging to improve categorisation and make searches more effective. The benefits of tagging data cannot be overrated, and there are numerous examples of practical use cases in which data tagging is instrumental, such as identifying sensitive personal data and improving the searchability of unstructured data.

Xplore Data Tagging

Stay tuned to discover more about the other functionalities and features of Xplore in our upcoming articles. The next issue in this series will focus on metadata management. Until then, if you would like to enquire about Xplore or request a product demo, please contact us. We would love to connect with you!


How to: Drill down and mitigate multi-hit cost

What are multi-hit costs?

Multi-hit cost is a feature of certain reference data pricing models; it is incurred when your organisation requests the same billable category multiple times on the same day for the same securities. Multi-hit costs can creep up and become a significant part of your monthly reference data invoice: in 2022 we observed that, on average, around 18% of reference data spend was attributable to multi-hit costs! These costs are often brushed aside in organisations due to a lack of cost transparency and understanding of the data pricing models, but Xmon is designed to help you overcome this inefficiency and ensure you pay only for what you need.


What can you do to understand and mitigate reference data multi-hit costs with Xmon?


Firstly, it is important to quantify how much of your monthly reference data invoice is attributable to multi-hit cost, to understand the scale of the potential optimisations. Using Xmon’s offline analysis, you can view the total multi-hit cost relative to the total invoice amount, as well as its monthly evolution. These analytics can be reviewed in the ‘Cost & Usage Explorer’ BI reporting tool under the ‘Invoice time series’ dashboard.

If you are using Xmon’s active integration and tracking every single request flowing out of your organisation in real time, multi-hit cost breakdowns can be found in the ‘Heat Map’ dashboard of the ‘Cost & Usage Explorer’ report menu.

The next step is to break down the multi-hit cost by billable category and focus on static data categories whose data does not change intraday, such as security definitions or corporate actions. For such categories there is no reason to request the same data multiple times on the same day, as the returned values will not have changed.

Once the high-level analysis has been performed, you can drill down to request level to provide transparency on the origin of the multi-hit cost and derive clear recommendations for system owners to mitigate this extra expense. Using Xmon’s offline analysis, this is easily done by downloading standard report 31 – Detailed itemised report by Filename. This report gives you the granularity to filter on a particular category or security type, and counts how many securities contribute to the multi-hit cost for each request. Grouping by request date helps you identify the day on which multi-hit cost changed or spiked. You can also filter on a specific day and group by filename to obtain the top five jobs contributing to your multi-hit cost.
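The same drill-down can be sketched over a hypothetical itemised extract – the rows, filenames and category below are invented for illustration; report 31 provides the real data:

```python
from collections import Counter, defaultdict

# Hypothetical itemised rows: (request_date, filename, category, security_id)
rows = [
    ("2023-03-01", "intraday_px.req", "Security Definition", "XS1"),
    ("2023-03-01", "intraday_px.req", "Security Definition", "XS2"),
    ("2023-03-01", "eod_batch.req",   "Security Definition", "XS1"),
    ("2023-03-02", "intraday_px.req", "Security Definition", "XS1"),
]

def multi_hit_by_day(rows, category):
    """Securities billed more than once per day for a static category."""
    per_day = defaultdict(Counter)
    for day, _, cat, sec in rows:
        if cat == category:
            per_day[day][sec] += 1
    return {day: [sec for sec, n in hits.items() if n > 1]
            for day, hits in per_day.items()}

def top_jobs(rows, day, n=5):
    """Top contributing filenames for a given day, most frequent first."""
    counts = Counter(filename for d, filename, _, _ in rows if d == day)
    return counts.most_common(n)
```

Here `XS1` is hit twice on 2023-03-01 (once by the intraday job, once by the end-of-day batch), pointing at the intraday job as the candidate to strip of static categories.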

If Xmon is integrated in active or passive mode, you can go one step further and review the details of each large request identified above, down to field-level usage. This helps you spot fields that need to be moved between jobs/requests in order to minimise or eradicate the use of static categories in intraday requests, grouping them into a single daily request.

Note that it might be possible to optimise further by splitting daily static-category requests so that only the fields relevant to each asset class are requested. This will be the topic of a future article in the “How to” series, so stay tuned.


Speak to one of our experts today

If you are looking to optimise your market data spend even further, do not hesitate to get in touch!






2022: A year in review

Get a quick overview of reference data statistics and usage metrics for 2022 with our informative infographic.

#2022 #infographic


Xmon Infographic 2022


Xplore: Whitepaper: The Importance of Data Discovery and Metadata Management

In this paper, we look at some of the problems that firms face with finding, classifying and exploiting their datasets. We then look at data discovery as a solution and we present Xplore, our platform for data discovery and metadata management.

Topics covered:
  • Data management challenges
  • How do firms address these challenges?
  • How to increase the value of a data discovery platform?
  • Xplore: A practical, simple and powerful data discovery platform


Click here to download the paper.