Banks Bolster Regulatory Compliance Initiatives with Apache Hadoop

Categories: Financial Services

The regulatory environment under which financial institutions must operate is undeniably complex. Over the past five years, regulatory demands have only intensified, placing banks under tremendous pressure and scrutiny. Banks are required to provide more information with greater granularity, accuracy, and timeliness than ever before, forcing them to confront a much bigger problem that has dogged them for decades: data silos.

Fragmented systems and issues with data management, quality, and governance have long been a roadblock for banks, preventing them from holistically managing risk across asset classes and gaining a 360-degree view of their operations and customers. In many ways, regulations have become a catalyst for banks to address these long-standing issues and build a modern data architecture. Ongoing amendments to existing regulations such as CCAR, along with the rise of new ones, require banks to aggregate, store, and analyze massive volumes of data of any age, from any source, and of any type. For a large bank, this may mean tapping into data from over 6,000 different sources and thousands of databases. For instance:

Basel Committee on Banking Supervision (BCBS) 239

BCBS 239 sets a new standard for risk data aggregation and reporting for global systemically important banks (G-SIBs). It calls for an enterprise-wide approach to managing risk, with more stringent requirements for accuracy, timeliness, reporting, and governance. The guidelines have a direct impact on how banks handle their data: their ability to aggregate risk data across multiple dimensions, and to ensure data lineage and governance. While BCBS 239 comes into effect in January 2016, a number of banks are expected to miss the deadline.

Comprehensive Capital Analysis and Review (CCAR)

CCAR is a regulatory framework introduced by the Federal Reserve to assess, regulate, and supervise banks and bank holding companies. At its core, CCAR requires banks to perform stress tests, with the goal of improving capital adequacy and capital planning processes while ensuring organizational solvency under severely adverse conditions. In 2016, the Federal Reserve is expected to intensify its scrutiny of banks, requiring them to further improve data integrity, reconciliation, risk identification, and controls.

To ensure adherence to these and other regulations, leading banks are opting for Cloudera Enterprise powered by Apache Hadoop. Cloudera Enterprise provides a path forward to building a modern data architecture while enabling banks to leverage their existing IT infrastructure. Banks can aggregate, store, and analyze any volume, age, and type of data required for stress tests while reducing the costs associated with maintaining the data and running the models. For instance, Apache Spark, a key component of Cloudera Enterprise, provides a cost-effective way to operationalize CCAR models. The algorithms used in risk models require an iterative, cyclic data flow, and Spark's in-memory processing lets users optimize and run risk models more frequently (see the sketch below). Moreover, security, data lineage, and governance are an integral part of Cloudera Enterprise.
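To make the iterative data flow concrete, here is a minimal sketch of a Monte Carlo-style loss simulation in PySpark. It is illustrative only: the simulation logic, trial count, and shock parameters are hypothetical and not drawn from any actual CCAR model. The key point is the call to cache(), which keeps the simulated trials in memory so that repeated aggregations over the same data avoid recomputation.

```python
# Minimal sketch of an iterative risk calculation on Spark.
# All model details (shock distribution, trial count, thresholds)
# are hypothetical and stand in for a real stress-test model.
import random
from pyspark import SparkContext

sc = SparkContext(appName="StressTestSketch")

NUM_TRIALS = 1_000_000

def simulate_loss(seed):
    """Toy single-trial loss: a random shock applied to a unit position."""
    rng = random.Random(seed)
    shock = rng.gauss(mu=-0.02, sigma=0.15)  # hypothetical stressed return
    return max(0.0, -shock)                  # loss is the negative return

# Cache the simulated trials in memory so each subsequent
# aggregation pass reuses them without regenerating the data.
losses = sc.parallelize(range(NUM_TRIALS), numSlices=64) \
           .map(simulate_loss) \
           .cache()

expected_loss = losses.mean()
# Approximate the 99% Value-at-Risk: the smallest value among
# the largest 1% of simulated losses.
var_99 = losses.top(int(NUM_TRIALS * 0.01))[-1]

print(f"Expected loss: {expected_loss:.4f}, 99% VaR: {var_99:.4f}")
sc.stop()
```

Because the RDD is cached after the first action, later aggregations over the same trials (for example, recomputing tail quantiles at different confidence levels) run against memory rather than disk, which is what makes repeated, iterative model runs cheap.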

The cost of compliance is undoubtedly high. Over the past few years, banks have disproportionately increased their compliance budgets. Banks that rank among the top 10 are allocating as much as $1 billion to $4 billion per annum to their compliance initiatives. However, the cost of non-compliance is even higher.
