How can we achieve the Holy Grail of data management?

For many years, firms have relied heavily on vast data warehouses. But how can they reach the fabled Holy Grail of gaining deeper data insight as well as better operational outcomes?

In recent times there has been greater use of data lakes – centralised repositories designed to store, process, and secure large amounts of structured, semi-structured, and unstructured data – and now Snowflake, a platform that combines the benefits of data lakes with the advantages of data warehousing and cloud storage.

The real questions, though, are: what are the differences between them, what challenges are they looking to solve, and – perhaps most pertinently – are they worth the effort?

It was a subject broached at a recent Funds Europe webinar, where Maxime Aerts, FE fundinfo’s Head of Asset Management Product Strategy and Managing Director (LU), joined other leading industry experts for a panel discussion.

Speaking on the webinar, Maxime said: “It is about not changing data warehouses to something new but improving the way that they work; it’s about making them more flexible and efficient in terms of usage and functionality, storage and access.”

It's a key consideration. Traditionally, firms have relied on a static data warehouse and a sequential data management process. The new concept is a data-centric firm, with data at the core rather than locked into input and output processes. This streamlines the current process and makes it more dynamic.
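As a deliberately simplified sketch of that difference, the Python below models a central hub that notifies every subscribed consumer the moment a value changes, rather than handing data down a fixed chain of teams. The DataHub class and the key names are hypothetical illustrations, not any FE fundinfo or vendor API.

```python
from collections import defaultdict
from typing import Callable

class DataHub:
    """Hypothetical data-centric hub: one source of truth, many consumers.

    In a sequential process, each team receives data only after the
    previous team has finished with it. Here, every subscriber sees an
    update the moment it is published.
    """

    def __init__(self) -> None:
        self._store: dict[str, object] = {}
        self._subscribers: defaultdict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, key: str, callback: Callable[[object], None]) -> None:
        # Register interest in a data point; any team can do this directly.
        self._subscribers[key].append(callback)

    def publish(self, key: str, value: object) -> None:
        # Store the new value, then notify all consumers at once.
        self._store[key] = value
        for callback in self._subscribers[key]:
            callback(value)

hub = DataHub()
# Illustrative key: a fund's net asset value, identified by ISIN.
hub.subscribe("fund/LU0000000001/nav", lambda v: print(f"reporting sees {v}"))
hub.subscribe("fund/LU0000000001/nav", lambda v: print(f"risk sees {v}"))
hub.publish("fund/LU0000000001/nav", 104.37)
```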

But it is not change for change’s sake. There must be a business case.

Building a business case for centralised fund data management

So what might the business case be? Well, ensuring data quality has typically been the responsibility of back-office teams alone, but this has changed rapidly – in no small part due to the impact that the Covid-19 pandemic had on ways of working.

“The lockdown aspect of the pandemic had an influence,” Maxime told the webinar. “People weren’t in the same space so communication between people and the availability of people were compromised. This showed up the vulnerabilities of the traditional, sequential approach.”

It's not the only consideration; several other trends are at play, such as the ability to manage liquidity on a real-time basis, while environmental, social and governance (ESG) is another area where the data is difficult to manage from the back office. Added to this, regulators are much more demanding of data and expect firms to be more agile.

The demand for real-time error handling and data validation has grown exponentially

“Firms need to access real-time data,” explained Maxime. “It is a change from the static data reports of the past to a more dynamic, real-time flow of data that is available to investors. But data consistency is the next challenge. For investors, that data must be comparable, so there needs to be more work on standards for the data to be easily accessible and usable for investors. If you look at PRIIPs and ESG regulations, they show the importance of data consistency and standards.”
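To illustrate what real-time validation can look like in practice, here is a minimal Python sketch that checks each record as it arrives, rather than in an end-of-day batch. The record fields (isin, nav, currency) and the rules shown are illustrative assumptions for this sketch, not any FE fundinfo schema or PRIIPs/ESG requirement.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FundRecord:
    """Illustrative record format; field names are assumptions."""
    isin: str
    nav: float
    currency: str
    as_of: datetime

def validate(record: FundRecord) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    # An ISIN is 12 characters and starts with a two-letter country code.
    if len(record.isin) != 12 or not record.isin[:2].isalpha():
        errors.append(f"malformed ISIN: {record.isin!r}")
    if record.nav <= 0:
        errors.append(f"non-positive NAV: {record.nav}")
    if record.currency not in {"EUR", "USD", "GBP", "CHF"}:
        errors.append(f"unexpected currency: {record.currency!r}")
    return errors

def on_record(record: FundRecord) -> None:
    # Validate each record the moment it arrives, so errors surface
    # while they can still be corrected, not in an overnight batch.
    errors = validate(record)
    if errors:
        print(f"{record.isin} rejected: {'; '.join(errors)}")
    else:
        print(f"{record.isin} accepted at {record.as_of.isoformat()}")

on_record(FundRecord("LU0000000001", 104.37, "EUR",
                     datetime.now(timezone.utc)))
on_record(FundRecord("BAD", -1.0, "XYZ", datetime.now(timezone.utc)))
```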

This leads to the question of how business users can take control of data accuracy and quality through technology, without becoming technologists or data experts.

Controlling fund data accuracy and quality

It’s an interesting consideration and the thought process leads to one pertinent question. How can firms harness the valuable asset they hold in their data to get deeper data insights while also achieving better operational outcomes?

“It is also about third-party data and metadata,” concluded Maxime, “then using this information – along with internal data – to help product development and product governance, with the goal of getting closer to the investor, which is, after all, the Holy Grail.”

---

Steven Kennedy, Senior PR Manager, FE fundinfo