Dodd-Frank. EMIR. FATCA. AIFMD. Solvency II.
The reporting obligations of fund managers just keep growing. The snowballing burden — with new regulations and reporting rules popping up at an alarming pace — is forcing many to step back from their usual resigned willingness to learn and follow one regulatory rule after another.
The emerging alternative: looking at the bigger picture of how all this reporting is affecting their businesses and what they can do about it.
Individually the reporting for each regulation may present a complex challenge, but a doable one. Combined, they are generating spiraling costs in specialized staffing and technology requirements and operational chaos in terms of finding and managing the data involved. Like other sectors in the heavily regulated financial services world, fund managers have historically responded to the new reporting requirements by just doggedly plowing through the additional work, filling out reports day in and day out. They know that securities watchdogs won’t hesitate to fine them if they are anything less than punctual and entirely honest. Of course, if investors find out they’re under regulatory investigation, they might as well shut their doors.
But the sheer number of these obligations and the clear message from the regulators that they can and will demand information from every corner of their business is forcing fund managers to reconsider some things they have taken for granted — like the way they handle data internally and whether making reporting more efficient is worth some unpopular decisions.
Against a background of grumbling about whether the regulators can actually digest all the data they are collecting, fund managers are trying to figure out just where the data is located, how to aggregate it, how to ensure its accuracy, how to format it, and then how to deliver it. In doing so, it is becoming clear that they are often performing the exact same tasks over and over again through different teams of specialists, each dedicated to a particular regulation. It is as though each piece of legislation were being treated as a separate client, each requiring its own dedicated staff, and doing so is beginning to look a little insane.
“Without a repeatable process driven by a technological engine, fund managers will find that regulatory reporting is tying up valuable internal resources and preventing them from conducting their day jobs,” points out Gary Kaminsky, managing director for global regulatory and compliance issues at regulatory reporting and risk management technology provider ConceptONE in New York.
Looking for Overlap
Among the various reporting requirements, several highly similar data sets are being identified. The largest data overlap can be found between the US Securities and Exchange Commission’s Form PF and Annex IV of Europe’s Alternative Investment Fund Managers Directive (AIFMD), fund management operations specialists tell FinOps Report. An estimated thirty to sixty percent of the data required of alternative fund managers might be the same. Another synergy: reporting over-the-counter derivative transactions to trade repositories under both Dodd-Frank and EMIR. Even so, data formats and data enrichment requirements vary.
Logically, anything has to be better than the current time-consuming and costly effort, yet fund managers aren’t exactly eager to change the status quo. Surprisingly, the reason is more political than technological. The obvious operational problem: “Data is stored in a multitude of applications in a multitude of formats, each with its own data model which could spell different amounts or even types of information stored for the same exact security or financial contract,” says Phil Sindel, president of financial technology and operations consultancy Olmstead Associates in Boston.
The internal politics that get in the way of fixing this: the business lines which use the data — otherwise called the data consumers — want to own the data and view it the way they prefer. Therefore, they haven’t always been too willing to cooperate with firmwide data management initiatives that involve the common data models and formats which would make life a whole lot easier for regulatory reporting purposes — but that also might open the question of ownership and control of “their” data.
The rigors of regulatory reporting might just make such an inefficient mindset a thing of the past. As fund managers look at using uniform data models and formats to get data from various applications into an apples-to-apples condition, they are also seeing ways to consolidate data for reporting activities. “Fund managers might not be reducing the number of teams dedicated to regulatory reporting, but they are reducing the number of databases from which they must source the data,” says Steve Engdahl, director of product strategy for GoldenSource, a New York enterprisewide data management software firm.
Shaping Up the Data
How does this work? All of the reference data needed for the multiple regulations is retrieved from one database, all of the counterparty data from another, and all of the transactional data from yet another — likely linked to the portfolio accounting system. Creating these databases involves reconciling similar data from a lot of sources. For example, the databases associated with the various operational software applications are likely to have overlapping records of client information. Discrepancies in content — such as different name spellings or different ways of identifying transactions — are reconciled through automated and manual cleansing. Data identification is standardized against a common data dictionary. These “normalized” databases feed a reporting engine, which collects the data points required for a report, reformats them to the regulatory requirements and prepares the report.
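To make that workflow concrete, here is a minimal sketch of the normalization step and the reporting engine it feeds. The source systems, field names and mappings are all invented for illustration; no particular vendor product or regulatory form is implied.

```python
# Illustrative sketch: normalizing client records from two hypothetical source
# systems against a common data dictionary, then feeding a simple reporting
# engine. All field names and mappings are invented.

# Common data dictionary: canonical field -> field name in each source system
DATA_DICTIONARY = {
    "legal_entity_name": {"portfolio_system": "client_name", "crm_system": "legalName"},
    "lei":               {"portfolio_system": "lei_code",    "crm_system": "LEI"},
    "country":           {"portfolio_system": "domicile",    "crm_system": "country_of_incorporation"},
}

def normalize(record: dict, source: str) -> dict:
    """Map one source-system record onto the canonical data model."""
    normalized = {}
    for canonical_field, source_fields in DATA_DICTIONARY.items():
        raw_value = record.get(source_fields[source])
        # Simple cleansing rule: strip whitespace and standardize case
        normalized[canonical_field] = raw_value.strip().upper() if isinstance(raw_value, str) else raw_value
    return normalized

def build_report(normalized_records: list, required_fields: list) -> list:
    """Reporting engine: pull only the data points a given filing requires."""
    return [{field: rec.get(field) for field in required_fields} for rec in normalized_records]

if __name__ == "__main__":
    portfolio_record = {"client_name": "Acme Capital ", "lei_code": "529900EXAMPLE00000", "domicile": "ie"}
    crm_record = {"legalName": "ACME CAPITAL", "LEI": "529900EXAMPLE00000", "country_of_incorporation": "IE"}
    records = [normalize(portfolio_record, "portfolio_system"), normalize(crm_record, "crm_system")]
    # A hypothetical filing that needs only the entity name and LEI
    print(build_report(records, ["legal_entity_name", "lei"]))
```

In practice the cleansing rules and the data dictionary would be far richer, but the principle is the same: source data is mapped into one canonical shape before any report-specific formatting happens.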
Yet another scenario winning some favor is an even more consolidated data management approach — centralizing the entire data aggregation and cleansing mechanism for reporting. In this model, coined a regulatory data factory, the data necessary for regulatory reporting is not retrieved at will each time a report must be filed, but is continually uploaded from various applications to a single repository which ultimately feeds a reporting engine, according to Siddharth Sakhardande, associate director of banking and financial services at global information technology consultancy Mindtree in Warren, New Jersey. In either case, a regulatory specialist team would likely have to review any report before a compliance officer signs off.
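The distinguishing feature of the data factory is timing: data flows into one repository continuously, and reports are built from that repository on demand rather than by gathering data at filing time. The sketch below illustrates the idea with an invented repository class and record structure; it is not a description of any specific product.

```python
# Illustrative sketch of a "regulatory data factory": source applications
# continually push updates into one central repository; the reporting engine
# reads only from that repository when a filing is due. Structures are hypothetical.
from collections import defaultdict
from datetime import datetime, timezone

class RegulatoryDataStore:
    def __init__(self):
        # dataset name -> list of timestamped records
        self._data = defaultdict(list)

    def ingest(self, dataset: str, record: dict) -> None:
        """Called continuously by source applications as their data changes."""
        self._data[dataset].append({"as_of": datetime.now(timezone.utc), **record})

    def snapshot(self, dataset: str) -> list:
        """Called by the reporting engine at filing time."""
        return list(self._data[dataset])

store = RegulatoryDataStore()
store.ingest("positions", {"isin": "XS0000000000", "quantity": 1_000})        # placeholder identifier
store.ingest("counterparties", {"lei": "529900EXAMPLE00000", "name": "EXAMPLE BANK"})
print(store.snapshot("positions"))
```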
“The backbone of an effective regulatory enterprise risk management system is a technology-driven means of aggregating data from front, middle and back offices and third parties to a data warehouse which can be used to populate the requisite outputs and to transmit electronically to regulatory agencies,” agrees Kaminsky.
Centralized vs. Distributed
A more revolutionary approach, but one that promises more long-term business benefit, according to its proponents, is one that doesn’t involve any specialized database at all. Rather, it makes data available for any purpose, delivered almost instantly and pre-formatted for the use of the application or person that requests it. That is because middleware is used to create a distributed or “virtual” data management layer, based on metadata about the data’s type, location and original format, as well as the formatting requirements of any destination that needs it. The virtual layer is also able to reconcile inconsistent data identification schemes, and could even attach to normalizing engines to cleanse content discrepancies.
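As a rough illustration of the virtualization idea, the sketch below uses a metadata catalog to record where each logical dataset physically lives, how to read it, and how to reformat it for whoever asks. The catalog entries, data sources and formats are hypothetical stand-ins for the middleware products this approach would use in reality.

```python
# Illustrative sketch of a "virtual" data layer: a metadata catalog maps each
# logical dataset to its physical location and format, and data is fetched and
# reformatted on demand. Sources and dataset names are hypothetical.
import csv, io, json

# Physical sources, standing in for the firm's real applications
CSV_SOURCE = "isin,price\nXS0000000000,101.25\n"
JSON_SOURCE = '[{"lei": "529900EXAMPLE00000", "name": "EXAMPLE BANK"}]'

def read_csv(raw: str) -> list:
    return list(csv.DictReader(io.StringIO(raw)))

def read_json(raw: str) -> list:
    return json.loads(raw)

# Metadata catalog: logical dataset -> where it lives and how to read it
CATALOG = {
    "prices":         {"location": CSV_SOURCE,  "reader": read_csv},
    "counterparties": {"location": JSON_SOURCE, "reader": read_json},
}

def fetch(dataset: str, output_format: str = "records"):
    """Resolve a logical dataset via the catalog and reformat it for the caller."""
    meta = CATALOG[dataset]
    records = meta["reader"](meta["location"])
    if output_format == "json":
        return json.dumps(records)
    return records  # plain list of dicts

print(fetch("prices"))
print(fetch("counterparties", output_format="json"))
```

The point of the pattern is that no consumer needs to know where the data actually sits or what format it was born in; the catalog answers those questions at request time.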
How popular is such an approach? Apparently, far more so for large investment banks and broker-dealers than fund managers. The reason: it’s easier to sell a data virtualization project to C-level management when there are far more business lines, larger-scale issues of data efficiency and deeper pockets involved. Most fund managers can’t afford the time and effort involved in setting it up, or don’t understand its merits, say data management experts.
Those data management specialists also question whether storage and retrieval of the data is the most urgent concern, when it comes to regulatory reporting. “The far more important questions are whether it is accurate and whether the same answers will be provided in multiple reports for the same timeframe, so similar questions can be answered consistently,” notes Kaminsky. “A compliance officer signing off on a report’s accuracy can’t possibly know each separate dataset, so he or she will have to trust his colleagues have done their jobs correctly.”
The risk of inconsistency doesn’t just reside in static reference and counterparty data. Responses to similar questions regarding assets under management, borrowings and leverage may be different simply because of different interpretations and diverse protocols for calculation.
Handing Off the Work
With accuracy critical and the cost of creating a centralized database or any other system to access the necessary data for regulatory reporting potentially so high, it might just be advisable to have someone else do the work. Enter the world of data utilities, such as the one offered by Smartstream, which retains cross-asset and pricing data. The advantage: spreading across multiple institutions the cost of gathering and maintaining common data elements.
“It’s a way of buying data and having all of the scrubbing and validation taken care of in a central location, instead of duplicating the effort across all subscribers to that data,” explains Nick Taplin, global director in London for pre-sales for Smartstream’s reference data utility. Rather than depending on one data source to cleanse another — which is the traditional approach — centralizing the effort makes it cost-effective to apply alternative techniques, from managing the impact of corporate actions to validation within each data feed and, as a last resort, human investigation and remediation.
Individual financial firms, says Taplin, can’t cost-justify the time and money a utility can spend on these tasks. By contrast, a data utility can, because the work — for example, researching the granular history and underlying data to cleanse a single data point — is done once and the correction delivered to all clients. The utility also has the resources to proactively manage data points that are most susceptible to inaccuracy rather than just assuming that if one vendor agrees with another the data must be correct.
Heightened Data Governance
Regardless of which data architecture fund managers choose to reduce the reporting chaos, it will bring them to the issue of data governance. A huge topic with multiple facets, data governance is generally understood to mean activities that promote data quality, consistency across applications, and cross-platform usability. In terms of regulatory reporting, the key questions to be answered may be who is using the data, who controls its accuracy, and who is responsible for making any necessary changes. The first question — which also asks where the reporting data will come from — has obvious answers: if it must meet a reporting requirement, the data will likely be coming from the trading desk, the middle-office risk management and collateral management departments, and the back-office clearance and settlement departments.
The latter two questions are far more difficult to answer. If business lines “own” the data they use and correct and change it at will, the potential for discrepancies in the same data across the firm gets pretty high. One of the strongest arguments for a more centralized approach to data management — whether the traditional database and normalization engine approach or the virtual data management approach — is the ability to monitor and control consistency and accuracy. Along with that argument comes the idea of a “data czar” with a team of data specialists overseeing data processes from a top-down orientation.
“Alternative fund managers are well-advised to appoint an internal chief data officer to coordinate the data inputs and outputs and oversee the various parties responsible for data flows among the front, middle and back offices and third-party service providers,” says Kaminsky. “Additionally, firms need to align their third-party reporting to mitigate disparities that can bring on unwelcome and unnecessary regulatory or investor scrutiny.”
Despite the merits, neither a data utility nor a data czar appears to have appealed to many fund managers. One possible reason: even the most progressive managers might be unwilling or unable to incur the explicit and implicit costs — ruffling multiple internal feathers — of setting up a better data management mousetrap. Practically speaking, large banks and broker-dealers typically have far more data feeds and business lines, so there are more potential points of error and higher financial risk.
So far, Smartstream’s data utility has only attracted users from a handful of tier-one investment banks, although interest is beginning to come from buy-side firms who understand the merits of such a business model. SWIFT’s utility for know-your-customer data is used strictly by large brand-name correspondent banks. Still in development, Depository Trust & Clearing Corp.’s utility for counterparty data is being promoted by large bank and broker-dealer customers.
Justifying the Investment
Why fund managers aren’t showing up in these lists could be attributed to a lack of access in some cases. Or it could be that their immediate concerns are more about transaction data than other categories. “Buy-side firms certainly need greater data accuracy, because they run all of their portfolio modelling and compliance engine off of data, but the data set is far smaller,” explains Taplin. “Transactional data poses the clear and present danger, so there is a much lower perceived cost to inaccurate reference data.”
However, their reasoning for not capitalizing on new data architectures, outsourced data utilities, or even better data oversight could be flawed. Avoiding regulatory enforcement for not complying with reporting requirements is just one way of justifying the cost. If buy-side firms are investing, calculating their exposures, and running their Monte Carlo risk simulations on the basis of bad data — such as the wrong security, issuer, price or customer risk profile — they could find themselves in serious financial hot water.
Yet another benefit beyond the avoidance of a regulatory crackdown or bad business: additional revenues. Enter the world of data analysis or data mining. In the case of regulatory data, the opportunity to link together the seemingly unrelated pieces of information regarding customers, counterparties and issuers can enable fund managers to understand the depth of their relationships — particularly with institutions or firms that might be in all three categories. “Recognizing that valuable information exists about the creditworthiness of an issuer, for example, provides keen insights when assessing the risk of taking on counterparty exposure with the same firm,” explains Engdahl.
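As a simple illustration of that kind of linkage, the sketch below groups hypothetical customer, counterparty and issuer records by a shared legal entity identifier, so that every role one institution plays for the firm can be viewed in a single record. The datasets and identifier values are invented.

```python
# Illustrative sketch: linking customer, counterparty and issuer records that
# refer to the same legal entity, keyed here by a placeholder LEI.
from collections import defaultdict

customers      = [{"lei": "529900EXAMPLE00000", "name": "EXAMPLE BANK", "aum_invested": 50_000_000}]
counterparties = [{"lei": "529900EXAMPLE00000", "otc_exposure": 12_000_000}]
issuers        = [{"lei": "529900EXAMPLE00000", "bonds_held": 8_000_000, "rating": "A-"}]

def link_entities(**datasets) -> dict:
    """Build a single view per legal entity across every role it plays."""
    linked = defaultdict(dict)
    for role, records in datasets.items():
        for record in records:
            lei = record["lei"]
            linked[lei][role] = {k: v for k, v in record.items() if k != "lei"}
    return dict(linked)

entity_view = link_entities(customer=customers, counterparty=counterparties, issuer=issuers)
for lei, roles in entity_view.items():
    print(lei, roles)
```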
UPDATE: JULY 30, 6:40 PM
Since the publication of this article, Depository Trust & Clearing Corp. has issued a statement about its industry-owned client data and documentation utility. Matthew Stauffer, chief executive officer of Clarient Global LLC, parent of the utility called Clarient Entity Hub, was unavailable for an interview, but offered the following information in response to emailed questions from FinOps Report:
Clarient Entity Hub is currently in user testing and is expected to go live in the fourth quarter of 2014. Although large banks (BNY Mellon, Barclays, Credit Suisse, Goldman Sachs, JP Morgan Chase and State Street) are co-founders of Clarient Global, over ten of the world’s top asset managers and hedge fund managers are participating in the “Clarient User Partner Program.”
When live, Clarient Entity Hub will provide fund managers, broker-dealers and banks with increased control, standardization and transparency during the client onboarding process and throughout ongoing client lifecycle events. The goal is to help industry participants better address evolving risk management and regulatory requirements, such as Know-Your-Customer and FATCA, as well as other client data and document challenges. Clarient Entity Hub, which will leverage current client data from DTCC’s subsidiary Avox and standing instructions database Omgeo ALERT, will be fully integrated with DTCC’s Client Reference Data and Enrichment Service, a portal to collect client counterparty information required for delegated trade reporting under the European Market Infrastructure Regulation.
Providers of information, such as investment managers, hedge fund managers, and corporates, will be able to securely upload, maintain, distribute and store data and documents through an intuitive interface allowing them to control who has access to the information. Consumers of information, such as banks and broker-dealers, can do the same. Access to legal entity records, specific information and documents will be handled on a permissioning basis, controlled by the provider or owner of the information. There are no fees planned for content providers to Clarient Entity Hub, while consumers will pay “utility-based” fees.