BARC finds that AI maturity depends on observability

The increased use of AI has raised questions over data quality and how AI uses that data. How organisations address these questions says much about their AI maturity. To gauge that maturity, Precisely sponsored research from the Business Application Research Centre (BARC). That research, Observability for AI Innovation (registration required), shows how important observability is to understanding what is happening with AI and data.

The report focuses on what it calls the State of Observability. It says, “We examine three distinct observability disciplines: data quality, data pipeline and AI/ML model. In each case observability refers to the measurement, monitoring and optimization of these elements.

“We find that most organizations now have formalized programs for data, pipeline and model observability. Organizations prioritize privacy, auditability, and compliance in their effort to foster Responsible AI.”

Monitoring inputs and outputs

The survey responses show an interesting level of maturity across all three areas of observability. Over 50% of respondents have either implemented or optimised programs for data quality (32% implemented, 26% optimised) and data pipeline (36%/19%) observability. Fewer, at 44%, have done the same for AI/ML models (23%/21%). Of concern, however, is that 16% have no program at all for AI/ML model observability.

Dig deeper, and the numbers look different. BI dashboards and standard reporting are mature technologies, yet just over 40% of organisations apply observability to them. For other areas, such as data provisioning and data warehousing, adoption is as low as 30%. For sentiment analysis, something closely aligned with AI/ML, it is just 16%.

This shows considerable variability across the different areas of observability. While that variability raises questions, the fact that it does not split neatly along new versus established technologies is of real concern. It suggests that observability coverage is patchier than BARC's headline findings indicate.

The gap between Europe and the US is growing

Another area of interest is the difference between the US and Europe. The US shows an average level of maturity 41 percentage points higher than Europe's (88% vs 47%). That is a substantial gap between the two, and one that is not reflected in other reports.

Observability is just part of the solution. Another key part is governance, and this is where the survey shows increasing maturity. It says, “40 percent of observability stakeholders focus on the governance-related priorities of privacy, trustworthiness, transparency/auditability, regulatory compliance and/or model accuracy.”

However, it is important to factor in the increasing levels of privacy legislation over the last decade. That legislation has driven considerable investment in data and privacy governance, and the numbers here are likely to reflect it. Had the figure been over 60%, it would have been more telling. Additionally, that legislative pressure means the split between Europe and the US should show a considerable tilt towards Europe. It doesn't.

What challenges does observability present?

The report also addresses the challenges to observability, and at first glance, there are no surprises. People, in terms of training and skills gaps, are the biggest challenge. More surprising is that 25% identified organisational confusion and a lack of leadership as a challenge. That points to more than just skills gaps; it suggests a major problem with how managers perceive and prioritise AI/ML.

The number of manual processes still in place is almost as problematic. Given the last decade of investment in process automation, this suggests organisations have pointed that automation at other priorities. Underlying the process problem is a lack of policies and best practices.

Resolving all of this is not impossible, but it will take a determined approach from organisations, starting with a rethink of priorities. That means appointing managers dedicated to data and how AI/ML consumes it, followed by better training, new processes and an overall focus on improving the observability and governance of data.

Unstructured data is the future of AI

As organisations build out their internal AI/ML solutions, they have had to address the challenge of unstructured data. In reality, much of that unstructured data is semi-structured: documents, emails and forms whose key fields follow predictable patterns. While the sheer variety of data remains a challenge, organisations should be able to extract usable data from those documents. It is also a process that, once established, lends itself to high levels of automation, as the sketch below illustrates.
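To make that concrete, here is a minimal Python sketch of the kind of extraction step such automation relies on. The invoice format, field names and regular expressions are illustrative assumptions for this article, not anything specified by BARC or Precisely.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical patterns: the documents are free text, but key fields follow
# predictable conventions, which is what makes the data semi-structured.
INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*:?\s*([\w-]+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*:?\s*[$£€]?\s*([\d,]+\.\d{2})", re.IGNORECASE)

@dataclass
class ExtractedInvoice:
    invoice_no: Optional[str]
    total: Optional[float]

def extract(document: str) -> ExtractedInvoice:
    """Pull structured fields out of a semi-structured text document."""
    no_match = INVOICE_NO.search(document)
    total_match = TOTAL.search(document)
    total = float(total_match.group(1).replace(",", "")) if total_match else None
    return ExtractedInvoice(
        invoice_no=no_match.group(1) if no_match else None,
        total=total,
    )

# Once patterns like these are settled, the same pass can run automatically
# over every new document that lands in a folder or message queue.
doc = "ACME Ltd\nInvoice No: A-10422\nTotal: $1,249.50\n"
print(extract(doc))  # ExtractedInvoice(invoice_no='A-10422', total=1249.5)
```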

Another challenge this unstructured data brings is its location. It is spread across corporate servers and local devices. Harvesting it from the latter is a real challenge, but doing so will surface a wealth of data that is otherwise lost to corporate visibility.

Enterprise Times: What does this mean?

This is a timely report as organisations continue to struggle with how best to deploy AI. Observability and data quality are unquestionably part of the solution, but so is governance.

Perhaps the biggest issue here is that we are still struggling with data quality and what it means. In the 1980s, marketing teams were happy with data quality of around 50-60%. Today, they will settle for 80-90%. AI requires more than that, but cleaning and enhancing data takes time and money.

Part of the solution has to be how we make data available. Historically, we’ve invested in creating massive duplicate data sets for different purposes. That is not enough for AI. We need direct access to raw data and ways to clean that data in situ. That need becomes an imperative if we are to access the vast quantity of unstructured/semi-structured data that AI wants to consume.
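What cleaning data in situ might look like is worth sketching. The Python fragment below is a hedged illustration, not an approach the report prescribes: it applies standardisation and validation rules to a record where it lives, flagging problems rather than spawning another cleaned copy. The field names and rules are assumptions.

```python
from datetime import datetime

def clean_record(record: dict) -> dict:
    """Standardise and flag a record in situ instead of copying it elsewhere."""
    issues = []

    # Standardise email addresses rather than duplicating a 'clean' copy.
    email = record.get("email", "").strip().lower()
    if "@" not in email:
        issues.append("invalid_email")
    record["email"] = email

    # Normalise dates to ISO 8601 so every consumer, AI included, sees one format.
    raw_date = record.get("signup_date", "")
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y"):
        try:
            record["signup_date"] = datetime.strptime(raw_date, fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        issues.append("unparseable_date")

    # Keep quality metadata alongside the data, a simple form of observability.
    record["quality_issues"] = issues
    return record

print(clean_record({"email": " Jane.Doe@Example.COM ", "signup_date": "31/01/2024"}))
# {'email': 'jane.doe@example.com', 'signup_date': '2024-01-31', 'quality_issues': []}
```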

Interestingly, it is not controlled AI development that is driving better ways of handling data. The report shows that it is the rise of Generative AI, both private and public, that is opening that up. The report's authors also see private GenAI use as a driving force behind greater observability. How long it will take to get there, and what challenges remain to be dealt with, is yet to be seen.

Overall, this is a useful document that deserves a place in the library of AI planning and governance teams. It is also a good marker for what defines AI maturity. While most Precisely-sponsored reports are carried out annually, this one should be repeated every six months to show what progress is being made.
