Data Governance and Observability, Explained

Summary: Let’s dive in and understand the ins and outs of data observability and data governance - the two keys to a more robust data foundation.


Data governance and data observability are increasingly being adopted across organizations because they form the foundation of an elaborate yet manageable data pipeline. Two to three years ago, the objective of organizations was to build enough proof of concept to earn the client’s trust in AI-based products, and even a simple AI feature was a differentiating factor that could easily give an edge over the competition.

In today’s landscape, however, AI-based features are everywhere and have become a necessity to stay competitive. This is why organizations now focus on building a solid foundation so that data solutions can be delivered as seamlessly and efficiently as regular software.

So, let’s dive in and understand the ins and outs of data observability and data governance - the two keys to a more robust data foundation.

What is Data Observability?

Data observability is a relatively new term, and it addresses the need to keep ever-growing data in check. With growing innovation and wide adoption across the corporate world, the tech stacks that host data solutions are becoming more efficient, but at the same time they are also becoming more complex and elaborate, which makes them difficult to maintain.

The most common issue that organizations face is data downtime: the period during which data is unreliable. It can take the form of erroneous data, incomplete data, or discrepancies across different sources. Without reliable data, there can be no hope for state-of-the-art solutions.
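To make this concrete, here is a minimal sketch of the kind of completeness, validity, and consistency checks a data observability layer might run on a daily extract. The table, column names, and thresholds are hypothetical, not part of the original article.

```python
import pandas as pd

def detect_data_downtime(df: pd.DataFrame, expected_min_rows: int = 1000,
                         max_null_rate: float = 0.05) -> list[str]:
    """Flag common signals of data downtime: missing rows, null spikes, duplicates."""
    issues = []

    # Completeness: far fewer rows than expected suggests a partial or failed load.
    if len(df) < expected_min_rows:
        issues.append(f"row count {len(df)} below expected minimum {expected_min_rows}")

    # Validity: a spike in nulls in a column often means an upstream schema or parsing change.
    null_rates = df.isna().mean()
    for column, rate in null_rates.items():
        if rate > max_null_rate:
            issues.append(f"column '{column}' null rate {rate:.1%} exceeds {max_null_rate:.1%}")

    # Consistency: exact duplicate records can indicate a double-loaded batch.
    duplicate_count = int(df.duplicated().sum())
    if duplicate_count > 0:
        issues.append(f"{duplicate_count} duplicate rows detected")

    return issues


if __name__ == "__main__":
    # Hypothetical daily transactions extract with an obvious null spike.
    transactions = pd.DataFrame({
        "transaction_id": range(1, 1201),
        "amount": [100.0] * 600 + [None] * 600,  # 50% nulls
    })
    for issue in detect_data_downtime(transactions):
        print("DATA DOWNTIME SIGNAL:", issue)
```

In practice such checks would be scheduled alongside the pipeline and their results recorded over time, which is exactly the telemetry discussed next.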

This is where data observability comes into the picture to make data maintenance manageable. This recent and growing need has led to the emerging field of observability engineering, which has three high-level components. In simple terms, these components are the formats data observability uses to aggregate information about the data (a short sketch follows the list):

Metrics: Metrics are numeric measures of data aggregated over a given time range.

Logs: Logs are records of events that occurred at different points in time.

Traces: Traces are records of related events spread across a distributed environment.
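As a rough illustration of how the three formats differ in practice, the sketch below shows the telemetry a single pipeline run might emit. All names, values, and the trace structure are hypothetical and only meant to contrast the three shapes of data.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("ingest_pipeline")

# Metric: a measure aggregated over a time window.
metric = {"name": "rows_ingested", "value": 48_210,
          "window": "2024-01-01T00:00/2024-01-01T01:00"}

# Log: a record of a single event at a point in time.
logger.info("loaded file orders_2024-01-01.csv with %d rows", metric["value"])

# Trace: related events (spans) that share a trace id across distributed steps.
trace_id = str(uuid.uuid4())
spans = [
    {"trace_id": trace_id, "span": "extract", "duration_ms": 850},
    {"trace_id": trace_id, "span": "transform", "duration_ms": 2_300},
    {"trace_id": trace_id, "span": "load", "duration_ms": 1_100},
]

print(json.dumps({"metric": metric, "spans": spans}, indent=2))
```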

Why is Data Observability necessary?

Data observability gives the added advantage of predicting data behavior and anomalies, which helps developers provision resources and prepare in advance. Its key capability is identifying the root cause, the reason that led to the recorded data performance. For instance, if the sensitivity score of a fraud detection model is relatively low, data observability digs into the data to analyze the why behind that low score.
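One simple way to surface the kind of anomaly such a tool would flag is to compare today’s value of a pipeline metric against its recent history. The sketch below uses a hypothetical daily row count and a z-score threshold; real observability tools use more sophisticated detectors, but the idea is the same.

```python
import statistics

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest value if it deviates strongly from the recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    z_score = abs(latest - mean) / stdev
    return z_score > z_threshold

# Hypothetical daily row counts for a table feeding the fraud detection model.
recent_row_counts = [50_120, 49_870, 50_410, 50_050, 49_990, 50_230, 50_110]
today = 12_300  # a sharp drop that could explain a degraded sensitivity score

if is_anomalous(recent_row_counts, today):
    print(f"Anomaly: today's row count {today} is far outside the recent range")
```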

This capability is crucial because, unlike regular software where most of the outcome is under the code’s control, in ML software most of the outcome is beyond the solution’s control. Data is the independent factor, and a single anomalous event can invalidate the solution. An example of such a data disruption is the pandemic, which upended employment rates, stock trends, voting behavior, and much more.

It is also quite common for a solution that works consistently well on one data group (say, data from a particular state) to fail badly on another data group.
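One way to catch this early is to evaluate the model separately on each data group rather than only in aggregate. The sketch below is a hypothetical fraud example (the column names, states, and data are invented): overall sensitivity looks acceptable while one group fails badly.

```python
import pandas as pd
from sklearn.metrics import recall_score  # recall is the same as sensitivity

# Hypothetical scored fraud predictions with the originating state attached.
results = pd.DataFrame({
    "state":     ["CA", "CA", "CA", "TX", "TX", "TX", "TX", "CA"],
    "is_fraud":  [1, 0, 1, 1, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 0, 0, 0, 1, 0],
})

overall = recall_score(results["is_fraud"], results["predicted"])
print(f"overall sensitivity: {overall:.2f}")  # looks tolerable in aggregate

# Per-group evaluation exposes the failing slice.
for state, group in results.groupby("state"):
    sensitivity = recall_score(group["is_fraud"], group["predicted"])
    print(f"{state}: sensitivity {sensitivity:.2f}")
```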

Therefore, understanding the why behind a solution’s performance becomes the top priority when assessing the output of any data solution.


Reposted from: kdnuggets.com


