Reference Data Management in the Modern Era

For too long, market data management boiled down, for the most part, to the timely payment of invoices. With data usage (and costs) steadily increasing, the proliferation of data sources, data types and data vendors, and the diversification of data consumers (think data scientists), market data management can no longer be restricted to a reactive, administrative role.

Proactive usage tracking

Managing data effectively is more fundamental than plotting historical cost graphs and allocating costs based on arbitrary, static formulas. Effective data management requires a proactive understanding of:

  1. Who the data consumers are
  2. How consumers are requesting data, and which data source each request goes to
  3. What the data is being used for
  4. Whether the data cost justifies its use

Beyond understanding usage and cost, market data management must include the ability to proactively control market data usage by:

  1. Restricting access to data for individual consumers based on their needs
  2. Controlling which data points (securities, fields) are available to consumers and sometimes enforcing data consumption policies
  3. Allocating costs back to consumers based on fair, factual data consumption analytics
  4. Ensuring data usage is compliant with vendor agreements as well as internal and external policies and regulations
  5. Generating market data usage intelligence through market data usage reporting

Beyond cost efficiencies

Although cost optimisation is a central focus of any market data management policy, it is by far not the only one. As data usage becomes more complex and more commoditised, tracking data within the firm becomes a necessity in order to respond to ever-increasing regulation and compliance considerations. Data tracking, also known as data lineage, is one of the most debated topics in modern market data management circles. Data lineage requires a clear understanding of detailed usage and a factual audit trail that can be revisited on demand, sometimes weeks or months after the fact.

XMon: Market Data Usage Under Control

XMon is a fully managed reference data usage analysis and business intelligence platform. XMon centralizes reference data requests into one data management interface where individual consumers are uniquely identified and data requests are tracked and can be controlled. This provides unparalleled levels of detail for market data usage reporting, ensures cost efficiencies are achieved and supports market data governance.

Reach out to us for more information, we’d love to chat.




Try our online data cost simulator

We’ve just released an online version of a Bloomberg Data License per-security request cost simulator. The simulator uses the XMon RestAPI to calculate the cost of a given data request and returns a breakdown by asset class and data category. Use one of the provided samples, or copy-paste your own Bloomberg per-security request, to try it out and get an estimate of your request cost.
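For illustration, the kind of payload such a cost-estimate call might carry can be sketched in a few lines. The endpoint URL and all field names below are hypothetical placeholders, not the documented XMon RestAPI:

```python
import json

# Illustrative only: the URL and payload field names are invented
# placeholders, not the documented XMon RestAPI.
SIMULATOR_URL = "https://example.com/xmon/api/v1/cost-simulator"

def build_payload(request_file_text: str) -> dict:
    """Wrap a raw Bloomberg per-security request for a cost-estimate call."""
    return {
        "vendor": "bloomberg",
        "service": "data-license",
        "request": request_file_text,
        "breakdown": ["asset_class", "data_category"],
    }

# This JSON body would be POSTed to the simulator endpoint.
body = json.dumps(build_payload("IBM US Equity|PX_LAST"))
```

The response, per the description above, would break the estimated cost down by asset class and data category.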

The tool is available here and is totally free to use, so go ahead and give it a spin!

The simulator calculates the initial hit cost of a data request and does not take multi-hits into account. Contact us for a full-featured trial of XMon, which includes multi-hit support, advanced business intelligence and cost optimisation analytics. All XMon customers also have access to secure RestAPI endpoints, which provide advanced integration capabilities and the ability to programmatically integrate with the XMon engine.

Reach out for a full demo of the product and a discussion on how XMon could help you get back control of your data spend.



Open Collaboration & Innovation Meetup

October 18, 2018

We’ll be in Paris on the 18th of October 2018 for the Finastra Innovation Meetup. We’ll be talking about innovation, collaboration and the integration of XMon. We look forward to an exciting and inspiring session; if you’re around, come and say hello!

Click here to register.


[Video] – Data Usage Analysis Report

XMon Static Data – Data Usage Analysis

The Data Usage Analysis report is available on demand from the XMon Static Data interface. It quantifies the cost of duplicate data requests and identifies cost-saving opportunities by analyzing individual field usage and its attribution to the overall invoice amount. Watch the short video below to see it in action.


XMon Static Data – Choosing your integration mode

XMon Static Data provides real-time analytics, transparency and control of static data flows. Using XMon, our customers have made significant savings and established unprecedented levels of transparency and accountability over data usage.

Within a firm, XMon can integrate at three levels:

  1. Data entering the firm: through third party data providers, such as Bloomberg, Reuters and others
  2. Data flowing within the firm: internal application calls and internal data flows
  3. Data exiting the firm: in the form of customer reports or other analytics

Although customer requirements vary greatly for points 2 and 3, point 1 is generally a standard interface to the leading data vendors: Bloomberg (through their Data License offering) and Reuters (over DataScope Select). In this post, we’ll describe the options customers have when monitoring data entering the firm.

Should we be Active?

For XMon to analyse requests, data request files must be sent in for analysis. For this purpose, XMon provides two integration modes: Active and Passive, otherwise known as Proxy and non-Proxy connectivity.

Active monitoring, or ‘proxy’ mode

In this mode, XMon acts as a reference data proxy: requests are forwarded to XMon for processing, and XMon then forwards them on to the data vendors, provided they have not breached any user-defined rule. This is by far the simplest connection mode. Since systems already connect to data providers, switching to active monitoring is merely a redirection of calls to XMon (generally a minor configuration change in the system), and applications are good to go: all requests sent through XMon appear automatically in the dashboard, and a new era of transparency begins.

It is, however, not always feasible (or desirable) to connect data consumers in active monitoring mode. Certain institutions have prohibitive policies which make active monitoring impractical in some cases. For these cases, the passive monitoring mode can be used. We’ll describe the passive monitoring mode in the section below, but first, a quick summary of the pros and cons of the active monitoring mode:
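The configuration change behind the two modes can be sketched in a few lines. The hostnames below are invented placeholders, not real XMon or vendor endpoints:

```python
# Sketch of the routing difference between the two integration modes.
# Hostnames are invented placeholders, not real XMon or vendor endpoints.
VENDOR_HOST = "dlsftp.vendor.example.com"   # direct vendor connection
XMON_PROXY_HOST = "proxy.xmon.example.com"  # XMon active-monitoring proxy

def endpoint(mode: str) -> str:
    """Return the host a data consumer should send request files to."""
    if mode == "active":
        # XMon records the request, applies user-defined rules,
        # then forwards it to the vendor.
        return XMON_PROXY_HOST
    if mode == "passive":
        # The request goes straight to the vendor; a copy is sent
        # to XMon separately for analytics.
        return VENDOR_HOST
    raise ValueError(f"unknown monitoring mode: {mode}")
```

In active mode the only change in the consuming system is this endpoint swap; in passive mode the vendor endpoint is untouched and a parallel copy is delivered to XMon.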

Psst… Want to know XMon’s availability statistics? We provide public uptime metrics!

Passive monitoring, or ‘non-proxy’ mode

In this mode, XMon receives a copy of data requests and does not forward them to the vendors. In passive monitoring mode, XMon merely gets a file copy or ‘parallel’ call and records it for analytics, reporting and governance. This mode requires that customers keep a copy of all data requests.

So, which mode should you choose?

96% of all requests sent to XMon are sent using Active monitoring. (Click here for more stats!).

That being said, both modes can be used in the same setup! A common approach is to have all development, test, UAT and staging environments connect through the active monitoring mode, and production systems connect through the passive monitoring mode. This provides control and live alerting on consumers that may run astray, together with flexibility and transparency on production consumers.

Want to know more?

Do reach out! And let’s get your data usage under control.

Bloomberg Scheduled Mode: Will it reduce your spend?

In addition to the two existing delivery modes, Bulk and Ad Hoc, Bloomberg have introduced a new commercial model for static data, referred to as Scheduled. Like the Ad Hoc mode, Scheduled is a request-response service, offered to provide optimised access to large data requests that do not change frequently. In practice, the Scheduled mode is a variation of the Ad Hoc mode with a fifteen-minute minimum request scheduling time and a different commercial model: Bloomberg require that a request be placed at least fifteen minutes before the response is needed, and pricing is based on pre-agreed quotas per asset class and data category.

Clients can, of course, have both modes (in addition to the Bulk mode) and use them as they see fit, within the constraints imposed by the data vendor for costs and response times.

We have been approached by several customers wishing to understand the impact of using the Scheduled mode on their data costs. As XMon models the Scheduled commercials, the simulation is executed in seconds within the engine.

XMon provides a hassle-free, detailed analysis of the Scheduled mode at the click of a button

The analysis of the Scheduled mode is done through our predefined report STD18, which runs a what-if analysis of this commercial model over any time period. In seconds, customers can see whether they would spend more or less per data category, and of course a tallied-up global saving or loss.

The report extract below is an example showing an overall saving of 18% over the Ad Hoc mode for this example customer.

XMon Bloomberg Scheduled Mode Simulation

As mentioned above, the Scheduled model also requires technical changes, so the move may involve changes to underlying application calls.

XMon provides a wealth of functionality to simulate, control and understand data costs. Reach out to us for more information about the Bloomberg Scheduled mode or to simulate this delivery on your data costs.


XMon Infographic

Check out our new infographic with data telemetry statistics…

XMon Data Telemetry Infographic

Working with XMon Tags: What is data being used for?

XMon creates an abstraction layer between data vendors, internal systems, end-users and applications. This is achieved through modeling ‘Data Sources’ (vendor accounts) and ‘Data Connectors’ (internal consumers of data) and provides visibility onto which system, end-user or application is pulling down data. The abstraction layer allows for granular flow control and accurate and fair cost allocation reports to be created quickly and easily.

Internal consumers can request and download data for different reasons from within the same application. For example, data may be downloaded from a portfolio management system for the following reasons:

  • Building the instrument universe
  • Obtaining data required to build a given report
  • Updating data intraday
  • Getting closing prices

And much more.

These data download requests may be made through manual interaction or through automated batch jobs. In addition, multiple users may be logging on to the same back-end system or application, effectively ‘masking’ the real consumer and the purpose of the data.

Where Data Connectors in XMon provide visibility onto the application pulling down data, XMon Tags provide visibility onto why this data is being pulled down.

How to define and use tags in XMon

Tags are defined in XMon in the SETTINGS > TAGS menu.

The screenshot below shows two tags defined: ‘JOBNAME’ which will be used to identify batch jobs requesting data and ‘USERNAME’ which will be used to identify individual users pulling down data.


Once tags are defined in XMon, they can be added to the request file itself in the form of a comment. Below is a sample Bloomberg Data License SFTP request file with an embedded tag:
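The original sample is a screenshot, so the sketch below only shows the general shape of a Data License request file with a tag embedded as a comment line. The firm name, security, field and comment syntax are illustrative placeholders; check the exact comment convention against your Data License documentation:

```
START-OF-FILE
FIRMNAME=dl000000
PROGRAMNAME=getdata
# JOBNAME=EOD_PRICING_BATCH
START-OF-FIELDS
PX_LAST
END-OF-FIELDS
START-OF-DATA
IBM US Equity
END-OF-DATA
END-OF-FILE
```

Here the `JOBNAME` tag (as defined in SETTINGS > TAGS above) identifies the batch job that placed the request.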


When a request containing a defined tag is sent to XMon, it is visible immediately as a neatly defined column in the dashboard where it can be filtered on and sorted:

Tags are applied across applications: if, for example, a job requires data to be pulled down from different applications or across different teams, those requests can share the same tag, and their costs will be aggregated automatically.
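The aggregation behaviour can be illustrated with a toy example. The request records, application names and costs below are all invented:

```python
from collections import defaultdict

# Toy illustration of tag-based cost aggregation across applications:
# requests carrying the same JOBNAME tag roll up together, regardless
# of which application sent them. All records are invented.
requests = [
    {"app": "pms",  "tag": "EOD_PRICING_BATCH", "cost": 42.50},
    {"app": "risk", "tag": "EOD_PRICING_BATCH", "cost": 17.25},
    {"app": "pms",  "tag": "REPORT_BUILD",      "cost": 5.00},
]

cost_by_tag = defaultdict(float)
for r in requests:
    cost_by_tag[r["tag"]] += r["cost"]

# cost_by_tag["EOD_PRICING_BATCH"] -> 59.75, aggregated across pms and risk
```

This is the same roll-up that makes tag-level cost allocation reports fair: the cost follows the purpose of the request, not the application that happened to send it.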

Tags are also available in XMon reports for BI and cost allocation purposes (e.g. what is the data cost for BA Research?), and in exported charts like the one below:



Tags are supported for Bloomberg and Reuters and can be added in SFTP and WebAPI type requests.

Get in touch for more information about XMon Tags and how they can be used to provide fine grained visibility on data flows and costs within your enterprise.


We’ve moved!

We have moved to Victoria!

Our new address is:

Audley House
13 Palace Street

Our phone numbers and all other contacts remain the same.



XMon new release 1.3

We’ve just released XMon version 1.3, a major update to the previous release. This version brings new features, extends existing functionality of XMon and improves end user experience. Version 1.3 was made available over the weekend as a seamless client update.

New Features

Bloomberg bill adjustments and discount factors

Clients are now able to adjust invoices using agreed discount factors or static adjustments. Bespoke customer agreements can now be modelled in XMon for real-time calculations, and customers can also run simulations to see the effect of potential discounts on invoices and spending.


XMon discount factor user input screen



Bloomberg BVAL support

XMon now supports Bloomberg BVAL type requests, with full calculation of prices and categorisation of requests.

OpenFIGI integration

OpenFIGI provides an open symbology repository to facilitate instrument identification over all global asset classes. XPansion is now an OpenFIGI facilitator and provides native integration with the OpenFIGI API. Through this integration, our customers are now able to identify instruments across references and perform more detailed and precise reports within the system. For more information about OpenFIGI and facilitators see:

List of OpenFIGI facilitators


Feature enhancements


Alerts engine

We’ve greatly improved our Alerts engine in version 1.3. Clients can now subscribe to more types of real-time alerts and can filter and search past alerts more easily.

New pricing matrix

We have extended our Bloomberg real-time pricing engine to include the latest iteration of the Bloomberg per-security pricing matrix. This involves changes in how Credit Risk instruments are priced and categorised and changes to prices for other categories as well. Clients who are on this new model can now configure XMon to calculate their invoices using the new prices by modifying their contract settings in XMon.

OpenAPI extensions

We have increased the scope of our OpenAPI methods to improve integration with client systems and in-house reporting dashboards.

As always, we’d love to hear from you, do reach out if you need more information about this release or for any other enquiry about the product.