The Top 3 Challenges of Cross-Platform Data Access Governance

Published: 01 September 2022

Cloud data storage and computing have all but become the standard for modern data usage. According to 451 Research, 90% of organisations currently rely on the cloud, and that number is continuing to grow at an accelerating pace. In Immuta’s 2022 State of Data Engineering Survey, we found that 81% of organisations expect to be “entirely” or “primarily” cloud-based within the next two years. Of these organisations, 62% plan to adopt two or more cloud data platforms–such as Snowflake, Databricks, or Starburst–within that time frame.

That said, cloud data platform migration and adoption is easier said than done. Consider the emergence of Apache Hadoop: the technology promised to bridge the gap between relational database management systems (RDBMS) and enterprise data warehouses (EDW) by reducing costs, enabling scalability, increasing processing speeds, centralising data lakes, and supporting semi-structured and unstructured data. Even so, Hadoop faced challenges like:

  1. “Big data” and Hadoop skills were scarce, making installation and operation challenging.
  2. Hadoop was primarily an on-premises batch processing tool.
  3. Hadoop’s security controls were nascent compared to those of traditional RDBMS and EDW systems, hindering enterprise adoption.

As with Hadoop, organisations moving to the cloud to further scale data processing will face an often-overlooked, but not surprising, challenge: securing and controlling access to data across a variety of cloud data platforms suited to different analytical use cases.

Within this challenge lie a number of specific obstacles to managing a cloud data ecosystem that can bottleneck data management, delay time to data access, and stifle data’s impact. Across industries, cross-platform data access control and governance present a range of emerging and significant challenges. These are three of the most prevalent–and what to know about avoiding them.

Challenge 1 - DataOps is Still On the Rise

Data for analytics and data science is moving to the cloud for the same reasons that software did–it’s the most cost-effective, collaborative, and flexible solution, with cutting-edge functionality.

Alongside increased cloud migration, the relevance of DataOps methodologies is also on the rise. For data engineering and architecture teams, DataOps is emerging as a guiding principle and ethos, just as DevOps did previously. The challenge is that DataOps as a framework lags behind the shift to data as a product, creating a gap between data needs and the resources necessary to meet them.

[Read More] DataOps Dilemma: Survey Reveals Gap in the Data Supply Chain

Without an effective DataOps methodology, organisations adopting multiple cloud data platforms lose the capacity to reliably scale user adoption. Manually implementing and maintaining data access controls on each individual platform creates burdensome work for data teams. To remain competitive and take full advantage of their data, organisations adopting multiple cloud data platforms should weigh the implications for their data teams and realistically assess the resources needed to apply a DataOps framework.

Challenge 2 - Enabling a Diverse Data Stack

As data use has become increasingly widespread, modern cloud data platforms have become synonymous with reducing time to data while driving insights and operational success.

From the C-suite to individual business analysts, data consumers expect real-time insights and decision making powered by data. Why? Data has the potential to provide advanced insights and reliable predictions that can give organisations an important competitive edge. With over half of modern organisations expecting to adopt two or more cloud data platforms within the next 24 months, the need for effective data storage and analysis is growing exponentially.

Cloud platform providers, however, run the gamut in terms of functionality and purpose, and each comes with its own set of features and capabilities, as well as new innovations to extract even more value from data. Yet onboarding each platform requires an adoption or migration process in which data teams must prepare data for use and safely add new users. This can pose unforeseen challenges that go well beyond time intensiveness.

As organisations modernise their data stacks and adopt multiple cloud compute platforms, the adoption process becomes considerably more complicated and time consuming for data teams. Organisations looking to maximise their data’s value need a solution that provides streamlined, cross-platform capabilities to discover, secure, and monitor data.

Challenge 3 - Manual Data Protection Processes

Data teams are the conduit between raw data and analytics-ready data that can be used to derive business-driving insights. But the steps involved in getting from one state to the other, while maintaining the security of sensitive data, can be time- and labour-intensive without the proper tools. When multiplied across a variety of cloud data platforms, this task becomes even more difficult.

The three most common obstacles inhibiting the provisioning of sensitive data for cloud analytics are:

  1. Effectively masking or anonymising data
  2. Successfully monitoring and auditing data use
  3. Securely controlling user access

As organisations adopt multiple cloud data platforms and handle sensitive data, the difficulty of executing these already-challenging processes will steadily multiply.

Consider role-based access control (RBAC) as an example. Traditional RBAC policies require new roles to be created for each data consumer. Once made, those policies are static–any future change to the data or to a user’s needs or permissions necessitates the creation of a new role. This leads to “role bloat,” where data engineers and architects must manage hundreds or thousands of user roles just to control access to data in specific tables or databases–and that’s just on a single cloud platform. Add in multiple platforms, and data teams are quickly overburdened with managing copies, roles, and access.
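
To make the scale of the problem concrete, here is a minimal sketch of how static role counts multiply; the user groups, sensitivity levels, and platform counts are hypothetical:

```python
# Hypothetical illustration of RBAC "role bloat": each combination of
# user group, data sensitivity level, and cloud platform needs its own
# statically defined role.

user_groups = ["finance", "marketing", "data_science", "support"]
sensitivity_levels = ["public", "internal", "confidential", "restricted"]
platforms = ["snowflake", "databricks", "starburst"]

roles = {
    f"{group}_{level}_{platform}"
    for group in user_groups
    for level in sensitivity_levels
    for platform in platforms
}

print(len(roles))  # 4 groups x 4 levels x 3 platforms = 48 roles
```

Even this toy example produces 48 roles; every additional group, sensitivity level, or platform multiplies the total, and each role must be created and maintained by hand.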

With attribute-based access control (ABAC), policies are written once and dynamically applied across cloud data platforms. They can be maintained and updated in one location yet remain effective throughout a cloud data ecosystem. This frees data teams from taxing manual policy upkeep and creates a data stack prepared to scale as new data sources, data users, and cloud data platforms are added.
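
As a rough illustration only (not Immuta’s actual policy engine), an attribute-based policy can be expressed once as a function of user attributes and data tags, then evaluated the same way regardless of which platform serves the query. The names, attributes, and tags below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class User:
    department: str
    region: str


@dataclass
class Column:
    tags: set[str]  # e.g. {"pii", "financial"}
    region: str


def can_read(user: User, column: Column) -> bool:
    """One attribute-based rule, written once and evaluated the same way
    no matter which cloud platform serves the query."""
    if "pii" in column.tags and user.region != column.region:
        return False
    if "financial" in column.tags and user.department != "finance":
        return False
    return True


# The rule references attributes and tags, not enumerated roles, so it
# applies unchanged whether the column lives in Snowflake, Databricks,
# or Starburst.
analyst = User(department="finance", region="EU")
salary_column = Column(tags={"pii", "financial"}, region="EU")
print(can_read(analyst, salary_column))  # True
```

Because the rule is defined over attributes rather than enumerated roles, adding a new platform or user group does not require new policies, only the attributes and tags that describe them.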

[Read More] RBAC vs. ABAC: Future-Proofing Access Control

How to Meet the Challenges of Cross-Platform Data Access Control

The future of cloud data analytics and data science is already upon us. Whether data-driven organisations succeed or fail in their data initiatives will depend on the imminent decisions they make regarding data access control and governance.

With businesses likely leveraging multiple cloud data platforms across their data ecosystem, a central platform to manage data access control, policy enforcement, governance, and privacy management has emerged as a virtual necessity. Adopting a single solution for managing consistent, auditable access to all cloud data is critical for deriving the highest value from data–without putting it at risk.

Don’t get left behind in the shift to the multi-cloud future of data use and analytics. To see what you can accomplish with Immuta’s cross-platform data access control and governance, request a demo today.