The Just-In-Time Data Analytics Stack


Navin Sharma

Navin Sharma is Vice President of Products at Stardog, a leading provider of Enterprise Knowledge Graph (EKG) platforms. For more information, visit www.stardog.com.

The notion of just-in-time (JIT) analytics is one of the most recent and significant developments in the data landscape. This new concept not only removes many of the conventional limitations that have hindered enterprise analytics, but also enables a capacity for continuous data intelligence that dramatically increases the value organizations derive from their data.

JIT analytics enables organizations to connect, query, search, integrate and analyze their data wherever it resides, without moving or copying it. As the name just-in-time suggests, companies can do this dynamically and transparently, at the moment business processes demand the information.

Backed by a cohort of fast-growing startups that directly support some of today’s biggest trends in data management (including data fabrics and data mesh), just-in-time analytics revolutionizes traditional analytics with a more comprehensive approach.

As a result, organizations can forgo many of the laborious, time-consuming and expensive infrastructure investments in elaborate data pipelines built on replicating data across their ecosystems. Instead, just-in-time analytics gives users greater flexibility, faster analysis and, most tellingly, business logic that lives at the data layer instead of being locked away in the storage layer in endless silos.

With data-driven decision-making and analytics at the heart of competition in today’s knowledge economy, software companies are racing to cut the time and cost of generating insights from data. As a result, just-in-time analytics is leapfrogging older techniques.

The end of data consolidation

JIT analytics signals the end of the era of data consolidation, in which data management relied on organizations moving data across warehouses, data lakes and data lakehouses.

By leveraging a variety of approaches, including data materialization, query tool abstractions and data virtualization, this new era of analytics eliminates the need to move data into a single location for analysis.
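One way to picture query federation, one of the approaches named above, is the minimal Python sketch below using the SPARQLWrapper library and the SPARQL 1.1 SERVICE keyword. The endpoint URLs and the ex: vocabulary are hypothetical, and the sketch assumes the "home" endpoint supports federation; the point is that the query is dispatched to the sources and only matching results travel back, never the underlying datasets.

```python
# A minimal federation sketch: the endpoint URLs and vocabulary below are
# hypothetical illustrations, not a specific product's API.
from SPARQLWrapper import SPARQLWrapper, JSON

# The "home" endpoint that plans and executes the federated query.
sparql = SPARQLWrapper("https://analytics.example.com/sparql")

# SPARQL 1.1 SERVICE delegates part of the pattern to a remote source,
# so customer records stay put and only bindings cross the network.
sparql.setQuery("""
    PREFIX ex: <http://example.com/schema#>
    SELECT ?customer ?orderTotal WHERE {
        ?customer a ex:Customer .
        SERVICE <https://warehouse.example.com/sparql> {
            ?customer ex:orderTotal ?orderTotal .
        }
    }
""")
sparql.setReturnFormat(JSON)

# Only the result bindings come back; no datasets are copied.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["customer"]["value"], row["orderTotal"]["value"])
```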

JIT represents a substantial shift in fundamental data strategy that not only results in better and faster analytics, but also shields the business from many pitfalls. Costs (related to data pipelines, manual code generation and the like) shrink, conserving resources.

Additionally, organizations significantly reduce regulatory and data privacy risk by not constantly copying data from one environment to another, an exposure that can bring costly penalties for noncompliance.


With so much data scattered across multicloud, hybrid cloud and polycloud deployments, this advantage is invaluable. Even better, business logic (schema, definitions, end users’ understanding of how data relates to business goals) is no longer locked in silos but lifted to the data layer for greater transparency and usefulness.

With most just-in-time analysis approaches, there is still some movement of data. But it’s minimal, well-documented, and only happens when a business process warrants it, as opposed to copying all the data from place to place before it’s used.

Overall analysis improvement

Perhaps the most notable distinction of just-in-time analytics is in both the amount of data involved in, and the quality of, the results.

There is a causal relationship between these effects: the more data users have for analyzing a particular situation or use case, the easier the analysis becomes. This is one reason data virtualization techniques are becoming increasingly popular for JIT analytics.

With this approach, users can connect all of their sources in a holistic data fabric, increasing the amount of data available for a use case (such as creating machine learning models), the variety of that data and the number of features that can be derived for real-time insights.

When there is an abundance of diverse data, users do not need a sophisticated algorithm to accurately train machine learning models for churn prediction. In fact, when data virtualization techniques are augmented with query federation and a semantic knowledge graph data model, organizations can expand the range of forms their analytics take.
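As a sketch of that point, the snippet below assumes feature rows have already been assembled from virtualized sources (the column names, such as tenure_months and support_tickets, are invented for illustration) and shows that a plain logistic regression, rather than a sophisticated algorithm, can serve as the churn model once enough diverse features are available.

```python
# Sketch only: assumes `rows` was assembled from virtualized sources.
# Column names (tenure_months, support_tickets, etc.) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rows = pd.DataFrame({
    "tenure_months":   [3, 40, 12, 1, 28, 60, 7, 18],
    "support_tickets": [5, 0, 2, 7, 1, 0, 4, 2],
    "monthly_spend":   [20, 95, 55, 15, 80, 120, 30, 60],
    "churned":         [1, 0, 0, 1, 0, 0, 1, 0],
})

X_train, X_test, y_train, y_test = train_test_split(
    rows.drop(columns="churned"), rows["churned"],
    test_size=0.25, random_state=0,
)

# With rich, well-integrated features, a simple linear model suffices.
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```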

Access to enough clean, timely data can reduce many analytical problems to simply answering questions. In the life sciences, for example, simply by connecting to enough data, even global suppliers can manage their supply chain needs with a query, dramatically reducing the analytical complexity of this age-old problem.
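As a hedged illustration of reducing the problem to a query: the toy graph and ex: vocabulary below are invented, but the one-line SPARQL property path (suppliedBy+) walks an arbitrarily deep supplier chain that would otherwise require custom traversal code.

```python
# Toy supply chain sketch with rdflib; the ex: vocabulary is hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.com/supply#> .
    ex:Vaccine        ex:suppliedBy ex:FillFinishSite .
    ex:FillFinishSite ex:suppliedBy ex:AntigenPlant .
    ex:AntigenPlant   ex:suppliedBy ex:RawMaterialVendor .
""", format="turtle")

# The + property path makes the transitive closure a one-line question:
# "who is anywhere upstream of the vaccine?"
for (supplier,) in g.query("""
    PREFIX ex: <http://example.com/supply#>
    SELECT ?supplier WHERE { ex:Vaccine ex:suppliedBy+ ?supplier . }
"""):
    print(supplier)
```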

Additionally, the knowledge graph data model’s semantic standards and business terminology support semantic search, quickly sifting through data for data discovery and other use cases.
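As a small sketch of what terminology-driven discovery can look like (the terms and identifiers here are again invented), attaching business vocabulary to data as rdfs:label annotations lets a search reduce to a filter over the graph:

```python
# Minimal search sketch: find anything labeled with a business term.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex:   <http://example.com/schema#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    ex:Cust123 rdfs:label "high-value customer" .
    ex:Ord456  rdfs:label "delayed order" .
""", format="turtle")

# Case-insensitive match against the business vocabulary.
for row in g.query("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?thing ?label WHERE {
        ?thing rdfs:label ?label .
        FILTER regex(str(?label), "customer", "i")
    }
"""):
    print(row.thing, row.label)
```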

Time to value

Besides the quality of the analytics produced and the different forms of analysis supported, JIT analytics’ main value proposition is its time to value.

When users constantly replicate data from place to place, they essentially create silos that take longer (and are harder) to integrate as sources for applications or analytics. Changing business requirements and new sources can break the existing schema, forcing data modelers to spend long stretches recalibrating models while fleeting business opportunities pass by.

The combination of data virtualization, query federation and graph data models enables organizations to quickly combine different schemas, using techniques that automate the logical inferences needed to do so at the moment integration is required.
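The sketch below is one minimal way to picture that automation, using rdflib plus the owlrl reasoner; the ex1:/ex2: vocabularies and the mapping axiom are invented. Two source schemas describe the same fact with different property names, a single owl:equivalentProperty axiom reconciles them, and the reasoner materializes the inferred triples so one query spans both sources.

```python
# Schema-combination sketch via inference. The ex1:/ex2: vocabularies
# and the mapping axiom are hypothetical illustrations.
import owlrl
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex1: <http://example.com/crm#> .
    @prefix ex2: <http://example.com/erp#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Two source schemas say the same thing with different terms ...
    ex1:alice ex1:familyName "Smith" .
    ex2:bob   ex2:surname    "Jones" .

    # ... and one mapping axiom reconciles them.
    ex1:familyName owl:equivalentProperty ex2:surname .
""", format="turtle")

# The reasoner materializes the inferred triples on demand, so the
# integration happens when it is needed rather than in advance.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# One query over one property now sees records from both schemas.
for row in g.query("""
    PREFIX ex1: <http://example.com/crm#>
    SELECT ?person ?name WHERE { ?person ex1:familyName ?name . }
"""):
    print(row.person, row.name)
```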

So organizations can integrate real-time data for their analytics or application needs to get answers faster and into the hands of business users who benefit.

Analytical innovation

The attention that investors and the startup community of data providers currently pay to just-in-time analytics may be relatively modest now, but it is growing at a rapid pace, matching that of companies in today’s knowledge economy.

These analytics approaches reduce cost and time to value while increasing the tangible business utility that analytics produces. As a result, organizations can take advantage of analytics more than ever before while making the overall process more affordable with a much faster time to value.

Featured image via Pixabay.
