Digital weaving, stitching the thread between data fabric & data mesh

The information landscape that an organisation travels through now defines the market-facing success or failure that any business can achieve. TIBCO director of digitalisation strategy Alessandro Chimera weaves between two key emerging trends to string together a rationale for data-driven advancement.

Information matters. This core reality now drives us to seek the most adaptable, manageable and scalable approaches to controlling the data that flows through an organisation. With some pressure to move towards real-time data-driven value, IT leaders are compelled to look for routes to process and analyse data the instant it is generated.

While the perfect recipe for data management has arguably never been found, we can see two emerging paradigms surfacing and, in some sense, also battling. Welcome to the face-off between data mesh and data fabric.

Beyond traditional data architectures

As we now work to push ourselves beyond traditional data architectures, our first steps must focus on evaluating the organisation’s current data estate.

From the start, openness is a prerequisite. Governed by appropriate policy-based access control, users need to be empowered with timely, consistent, and trusted data analytics through a paradigm that can grant democratised access for faster and smarter business decisions.
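To make the idea of policy-based access control concrete, here is a minimal sketch in Python. Every name in it (`Policy`, `can_access`, the role and dataset strings) is an illustrative assumption, not part of any particular data platform; real implementations would sit in the data layer itself rather than in application code.

```python
# A minimal sketch of policy-based access control over a shared data layer.
# All names and structures here are illustrative assumptions, not a real product API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Policy:
    role: str       # the role the policy applies to, e.g. "analyst"
    dataset: str    # the dataset the policy governs
    allowed: bool   # whether access is granted


def can_access(policies, role, dataset):
    """Grant access only if an explicit policy allows this role on this dataset."""
    return any(p.allowed and p.role == role and p.dataset == dataset
               for p in policies)


policies = [
    Policy("analyst", "sales_orders", True),
    Policy("analyst", "hr_salaries", False),
]

print(can_access(policies, "analyst", "sales_orders"))  # True
print(can_access(policies, "analyst", "hr_salaries"))   # False
```

The key design point the sketch illustrates is “deny by default”: democratised access does not mean unrestricted access, it means access governed by explicit, auditable policy.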

As we consider the balancing point between data mesh and data fabric, it’s important to be future-proof from the start. Trends come and go, but foundations stay, so adopting best-of-breed technologies from the outset is essential. The two approaches share similarities, so understanding their overlap and their differences matters.

What is data fabric?

According to Gartner, a data fabric is “a design concept that serves as an integrated layer (fabric) of data and connecting processes”. That doesn’t tell us much on its own, but we can infer some deeper meaning, i.e. that the notion of the data fabric brings data and its associated services and functions into closer proximity.

Analyst house Forrester takes us (arguably) a little closer to the source with a statement that reads as follows: “Data fabric focuses on automating the process integration, transformation, preparation, curation, security, governance and orchestration to enable analytics and insights quickly for business success.”

Perhaps also a little big picture, Forrester’s take on data fabric is really an extended notion of data management. That is, it encompasses every aspect of ‘data work’ (not a formal term, but it conveys the focus here) that might be carried out if an organisation has taken a progressive approach to achieving data-driven insights.

Consider data not just as a binary source of information but as a contextualised entity in a working operating business. We can say that a data fabric is an approach designed to simplify the ingestion, access, sharing, governance and security of data across all applications and environments.

Talk can be cheap, so let’s be clear about what we mean when we say applications and environments. This is a space that spans multi-cloud, hybrid-cloud, poly-cloud compute and storage instances across all enterprise computing spaces, including edge and IoT. Let’s also say that the data fabric is a tough all-weather garment. It exists to constantly improve data quality by automatically discovering the business relevance between different data sets through the application of AI models and their Machine Learning (ML) functions.

What is data mesh?

If we have some grasp of data fabric by now, then how do we explain data mesh? Put simply, a data fabric spans data across all aspects of a business, whereas a data mesh operates solely within the realms of data and information.

We can see this basic idea expanded upon if we once again turn to the sages at Gartner and Forrester.

According to Gartner, “A data fabric is a technology-enabled implementation capable of many outputs, only one of which is data products. A data mesh is a solution architecture for the specific goal of building business-focused data products.”

And according to Forrester, “A data mesh is a decentralised sociotechnical [meaning both people and technology] approach to share, access and manage analytical data in complex and large-scale environments – within or across organisations.”

Going further, we now know that a data mesh architecture can be defined as an approach that aligns data sources by business domains or functions with data owners. Through the decentralisation of data ownership, data owners can create data products for their respective domains, meaning data consumers (including data scientists and business users) can combine these data products for data analytics and data science.

The value of the data mesh approach is that it shifts the creation of data products upstream, to the subject matter experts who know the business domains best, rather than relying on data engineers to cleanse and integrate data downstream. Furthermore, the data mesh accelerates the re-use of data products by enabling a publish-and-subscribe model and leveraging APIs, which makes it easier for data consumers to get the data products they need. This concept aligns with microservices, where a data product can be mapped to a single, specialised service.
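The publish-and-subscribe model described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not a real data mesh framework: the `DataProduct` and `Catalogue` names, the domain/product keys and the sample records are all hypothetical.

```python
# A minimal sketch of the data mesh idea: domain teams publish data products
# into a shared catalogue, and consumers subscribe to and combine them.
# All names and data here are illustrative assumptions, not a real framework.

class DataProduct:
    def __init__(self, domain, name, owner, fetch):
        self.domain = domain   # the business domain, e.g. "sales"
        self.name = name       # the product's name within that domain
        self.owner = owner     # the subject-matter-expert team that owns it
        self.fetch = fetch     # a callable returning the product's records


class Catalogue:
    """A publish-and-subscribe registry keyed by (domain, product name)."""
    def __init__(self):
        self._products = {}

    def publish(self, product):
        # The owning domain team registers its data product for re-use.
        self._products[(product.domain, product.name)] = product

    def subscribe(self, domain, name):
        # A data consumer looks up a published product by domain and name.
        return self._products[(domain, name)]


catalogue = Catalogue()
catalogue.publish(DataProduct("sales", "daily_orders", "sales-team",
                              lambda: [{"order_id": 1, "region": "EU"}]))

# A data consumer (say, a data scientist) pulls the product via the catalogue:
orders = catalogue.subscribe("sales", "daily_orders").fetch()
print(orders[0]["region"])  # EU
```

The mapping to microservices is visible in the shape of the sketch: each data product has one owner, one interface (`fetch`) and one registration point, so consumers depend on the published contract rather than on the owning team’s internal systems.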

What’s best, data mesh or data fabric?

Faced with the question of which is best – data mesh or data fabric – you already know the answer: it depends. There is no black-or-white answer here.

The data mesh approach is visionary, expansive and powerful, but there is no such thing as a free lunch. Adopted as a pure-play paradigm, the data mesh technique could (arguably, if not practically) lead to a scenario where an organisation has refactored data concepts into data products, which risks creating information silos.

Data Fabric Case Study

As a working example that helps to paint the picture here, consider a large worldwide supplier of flooring products, including carpet, tile, hardwood, laminate and more. The company was suffering from a lack of confidence in relation to its employee-related data. It was a situation brought about by multiple corporate acquisitions and the resultant sprawl of data that had scattered across multiple systems.

The solution here came by modernising the data infrastructure with a no-code/low-code data fabric able to holistically integrate customer, production and operations decisions systems to provide fast and informed decision-making. The company was able to build and establish a future-proof platform capable of being extended to manage multiple master data domains. It also provided real-time visibility into the manufacturing process to alert on issues and adjust on the fly. Ultimately, the entire business was able to rely on trusted self-service access to any data through one unified source.

Is a data mesh better, though?

We can see that data mesh is a progressive advancement. However, for every action, there is an equal and opposite reaction; the cost of that progression is the responsibility and effort needed to bring about organisational change to establish product owners in various domain teams.

That isn’t necessarily a bad practice, since it empowers people to be responsible for their data. But not every company is ready to promote and assign new roles and responsibilities to build a new and modern data architecture – and for a data mesh, this often means starting from scratch. Doubts may surface amongst data product owners, who might tend to protect their own domain. They may, consciously or subconsciously, add friction to collaboration when other product owners request changes or claim ownership.

As we weigh up the pros and cons of this whole discussion and think about how modern organisations are going to architect their data layer going forwards, we can at least suggest that the data fabric paradigm is a far less disruptive approach. It allows an enterprise data infrastructure to grow step-by-step in sync with the needs of the business. In most use cases, the data fabric aligns more closely with the maturity level of many of the other technical components that an organisation already operates in its IT stack. It also brings people and processes into focus, helping technology and data align with the organisation’s goals and needs.

The fundamental difference between both paradigms is that data fabric is a technology-driven approach, whereas data mesh is an organisational-driven approach.

Panera Bread turns to a Data Fabric

A prime example of data fabric at work is Panera Bread, a well-known chain of fast-casual restaurants. In its industry, fostering an intimate connection with customers while adapting quickly to change can be difficult. Historically, changes in customer tastes, market pressures and even rapid growth have put quality and service in peril. The real leaders are the companies that can adapt quickly without sacrificing quality and service, improving both along the way.

With TIBCO as its foundation, Panera Bread built the Panera Data Pantry. It enabled them to understand and manage the full spectrum of their menu and supply chain data complexities. This became crucial during the onset of the COVID-19 pandemic; thousands of stores’ information needed to be updated quickly. A rapid shift was required to support the sudden and increased dependence on outside delivery services. But there was another growing problem, with in-store eating effectively shut down: what to do with their sizable perishable inventory. Thanks to their strong data management foundation, in less than two weeks, the company built and launched Panera Grocery. It enabled the company to offer bread, produce, and other menu items to customers as a grocery service — filling a need in a depleting nationwide food supply while leveraging their existing supply chain to make it happen.

Panera Bread is a prime example of a company that sought digital transformation and is thriving because of the newfound flexibility made possible by a modern data fabric. The company can move faster, more efficiently and at a lower cost, and continue to delight customers along the way.

What to do next

We’ve discussed a lot here and we may even have given CIOs, CTOs and CISOs too much to think about. Whichever route an enterprise – large, medium or indeed small – takes, the best advice is to avoid big-bang rip-and-replace actions that cause unnecessary disruption to current business operations. A future-proofed, systematic and methodical strategy with careful evaluation throughout is the way to progress.

It’s important for any business to protect any previous investment already made in data management solutions. That means that an organisation should think about adopting the data paradigm that can most suitably and seamlessly incorporate those solutions. Ultimately, the goal is to build a modern, future-proof data infrastructure.

It’s up to every company to understand what to take from one and from the other paradigm since maturity levels are different in organisations.

There is no magic enterprise technology vendor selling a one-size-fits-all, ready-to-install data infrastructure. Whichever way we slice the data stack of the future, the more we can increase the ability to get value out of ‘any’ (and I mean any) data, the better placed employees are to make well-informed decisions that drive the business forwards.

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected Intelligence Platform seamlessly connects any application or data source; intelligently unifies data for greater access, trust, and control; and confidently predicts outcomes in real time and at scale.

