The Amazing Applications of Graph Neural Networks

Summary: The predictive prowess of machine learning is widely hailed as the summit of statistical Artificial Intelligence. Vaunted for its ability to enhance everything from customer service to operations, its numerous neural networks, multiple models, and deep learning deployments are considered an enterprise surety for profiting from data.



But according to Franz CEO Jans Aasman, there’s just one small problem with this otherwise accurate esteem: for the most part, it “only works for what they call Euclidean datasets where you can just look at the situation, extract a number of salient points from that, turn it into a number in a vector, and then you have supervised learning and unsupervised learning and all of that.”

Granted, a generous portion of enterprise data is Euclidean and readily vectorized. However, there’s a wealth of non-Euclidean, multidimensional data serving as the catalyst for astounding machine learning use cases, such as:

■ Network Forecasting: Analysis of all the varying relationships between entities or events in complex social networks of friends and enemies yields staggeringly accurate predictions about how any event (such as a specific customer buying a certain product) will influence network participants. This intelligence can revamp everything from marketing and sales approaches to regulatory mandates (Know Your Customer, Anti-Money Laundering, etc.), healthcare treatment, law enforcement, and more.

■ Entity Classification: The potential to classify entities according to events—such as part failure or system failure for connected vehicles, for example—is critical for predictive maintenance. This capability has obvious implications for fleet management, equipment asset monitoring, and other Internet of Things applications.

■ Computer Vision, Natural Language Processing: Understanding the multidimensional relationships of words to one another, or of images in a scene, transfigures typical neural network deployments for NLP or computer vision. The latter supports scene generation: instead of a machine looking at a scene of a car passing a fire hydrant with a dog sleeping nearby, those objects can be described in words so the machine generates that picture.

Each of these use cases revolves around high dimensionality data with multifaceted relationships between entities or nodes at a remarkable scale at which “regular machine learning fails,” Aasman noted. However, they’re ideal for graph neural networks, which specialize in these and other high-dimensionality data deployments.

High-Dimensionality Data

Graph neural networks achieve these feats because graph approaches focus on discerning relationships between data. Relationships in Euclidean datasets aren’t as complicated as those in high-dimensionality data, since “everything in a straight line or a two-dimensional flat surface can be turned into a vector,” Aasman observed. These numbers or vectors form the basis for generating features for typical machine learning use cases.

Examples of non-Euclidean datasets include the numerous relationships of over 100 aircraft systems to one another, links from one group of customers to four additional groups, and the myriad interdependencies of the links between those additional groups. This information isn’t easily vectorized and eludes the capacity of machine learning without graph neural networks. “Each number in the vector would actually be dependent on other parts of the graph, so it’s too complicated,” Aasman commented. “Once things get into sparse graphs and you have networks of things, networks of drugs, and genes, and drug molecules, it becomes really hard to predict if a particular drug is missing a link to something else.”
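Aasman’s point—that in a graph, each number in a node’s vector depends on other parts of the graph—can be illustrated with a minimal sketch. This is our own toy example in plain NumPy, not Franz’s implementation: two rounds of mean neighbor aggregation (the core operation inside graph neural network layers) make a node’s embedding shift when an edge is added elsewhere in the graph, even though the node’s own features and direct neighbors are unchanged.

```python
import numpy as np

def embed(adj, feats, rounds=2):
    """Repeatedly average each node's features with its neighbors'.

    After `rounds` passes, every embedding reflects its wider graph
    neighborhood, which is why graph rows can't be vectorized in isolation.
    """
    deg = adj.sum(axis=1, keepdims=True) + 1      # +1 for the self-loop
    emb = feats.astype(float)
    for _ in range(rounds):
        emb = (emb + adj @ emb) / deg             # mean over self + neighbors
    return emb

feats = np.eye(4)                                 # one-hot node features

# Path graph 0-1-2-3.
a1 = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    a1[i, j] = a1[j, i] = 1

# Same graph plus one extra edge far from node 0: 1-3.
a2 = a1.copy()
a2[1, 3] = a2[3, 1] = 1

# Node 0's features and its only direct neighbor (node 1) are identical in
# both graphs, yet its embedding changes, because the second aggregation
# round pulls in node 1's altered neighborhood.
assert not np.allclose(embed(a1, feats)[0], embed(a2, feats)[0])
```

A flat, Euclidean record would be unaffected by such a distant change; the graph embedding is not, which is exactly the dependency Aasman describes.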

Relationship Predictions

When the context between nodes, entities, or events is critical (as in the pharmaceutical use case Aasman referenced, or any other complex network application), graph neural networks provide predictive accuracy by understanding the data’s relationships. This quality manifests in three chief ways:

■ Predicting Links: Graph neural networks are adept at predicting links between nodes to readily comprehend if entities are related, how so, and what effect that relationship will have on business objectives. This insight is key for answering questions like “do certain events happen more often for a patient, for an aircraft, or in a text document, and can I actually predict the next event,” Aasman disclosed.

■ Classifying Entities: It’s simple to classify entities based on attributes. Graph neural networks do this while considering the links between entities, resulting in new classifications that are difficult to achieve without graphs. This application involves supervised learning; predicting relationships entails unsupervised learning.

■ Graph Clusters: This capability indicates how many subgraphs a specific graph contains and how those clusters relate to each other. This topological information is based on unsupervised learning.
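The three capabilities above can be sketched together on a toy graph. This is a hedged illustration of the ideas, not a trained graph neural network: embeddings come from simple mean neighbor aggregation, link prediction scores candidate edges by embedding similarity, classification assigns a node the label of its nearest labeled neighbor in embedding space, and clustering falls back to connected components. All names and data are invented.

```python
import numpy as np

def aggregate(features, adj, rounds=2):
    """Average each node's features with its neighbors', `rounds` times."""
    deg = adj.sum(axis=1, keepdims=True) + 1          # +1 for self-loop
    emb = features.astype(float)
    for _ in range(rounds):
        emb = (emb + adj @ emb) / deg
    return emb

# Toy graph: edges 0-1 and 1-2; node 3 is isolated.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2)]:
    adj[i, j] = adj[j, i] = 1

emb = aggregate(np.eye(4), adj)

# 1. Link prediction: score a candidate edge by embedding similarity.
def link_score(u, v):
    return float(emb[u] @ emb[v])

# Nodes 0 and 2 share a neighbor, so they outscore the unrelated pair 0, 3.
assert link_score(0, 2) > link_score(0, 3)

# 2. Entity classification: nearest labeled node in embedding space.
labels = {0: "A", 2: "A", 3: "B"}                     # node 1 is unlabeled
def classify(u):
    best = min(labels, key=lambda v: np.linalg.norm(emb[u] - emb[v]))
    return labels[best]

# 3. Graph clusters: connected components of the adjacency structure.
def components(adj):
    n, seen, comps = len(adj), set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(n) if adj[u, v] and v not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps
```

In a real deployment the aggregation would be a learned GNN layer, but the shape of each task—edge scoring, label propagation, and cluster discovery—is the same.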

Combining these qualities with data models rich in temporal information (including the time of events, e.g., when customers made purchases) generates cogent machine learning predictions. This approach can illustrate a patient’s medical future based on his or her past and all the relevant events of which it’s comprised. “You can say given this patient, give me the next disease and the next chance that you get that disease in order of descending chance,” Aasman remarked. Organizations can do the same thing for customer churn, loan failure, certain types of fraud, or other use cases.
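The “next disease, in descending order of chance” idea can be sketched with a deliberately simple stand-in for the graph model: empirical transition counts over historical event sequences. The patient histories and disease names below are invented for illustration; a production system would learn these probabilities from the event graph rather than from raw counts.

```python
from collections import Counter, defaultdict

# Invented historical event sequences, one per patient.
histories = [
    ["obesity", "hypertension", "diabetes"],
    ["obesity", "hypertension", "stroke"],
    ["obesity", "diabetes"],
]

# Count how often each event is followed by each other event.
transitions = defaultdict(Counter)
for seq in histories:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def next_events(event):
    """Possible next events, ranked by descending empirical probability."""
    counts = transitions[event]
    total = sum(counts.values())
    return [(e, c / total) for e, c in counts.most_common()]
```

For example, `next_events("obesity")` ranks hypertension (seen 2 of 3 times) ahead of diabetes (1 of 3), mirroring the descending-chance ranking Aasman describes. The same shape applies to churn, loan failure, or fraud events.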

Topological Text Classification, Picture Understanding

Graph neural networks render transformational outcomes when their unparalleled relationship discernment is applied to NLP and computer vision. For the former, they support topological text classification, which is foundational for swifter, more granular comprehension of written language. Conventional entity extraction can pinpoint key terms in text. “But in a sentence, things can refer back to a previous word, to a later word,” Aasman explained. “Entity extraction doesn’t look at this at all, but a graph neural network will look at the structure of the sentence, then you can do way more in terms of understanding.”
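The “topological” view of text can be made concrete with a small sketch of our own: instead of treating a sentence as a flat bag of words, build a graph linking each word to its neighbors, plus an edge carrying a pronoun back to its referent. The sentence and the coreference edge are illustrative assumptions, not from the article; a real pipeline would produce such edges with a parser or coreference resolver.

```python
sentence = "the dog chased the ball because it bounced".split()

edges = set()
for i in range(len(sentence) - 1):
    edges.add((sentence[i], sentence[i + 1]))     # word-order adjacency edges
edges.add(("it", "ball"))                         # "it" refers back to "ball"

def neighbors(word):
    """Words linked to `word` in the (undirected) sentence graph."""
    return sorted({b for a, b in edges if a == word} |
                  {a for a, b in edges if b == word})
```

Here `neighbors("it")` includes "ball", so the backward reference Aasman mentions is visible to any model that consumes the graph—information that plain entity extraction over the token sequence never records.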

This approach also underpins picture understanding, in which graph neural networks understand the way different images in a single picture relate. Without them, machine learning can just identify various objects in a scene. With them, it can glean how those objects are interacting or relate to each other. “[Non-graph neural network] machine learning doesn’t do that,” Aasman specified. “Not how all the things in the scene fit together.” Coupling graph neural networks with conventional neural networks can richly describe the images in scenes and, conversely, generate detailed scenes from descriptions.

Reposted from: Inside Big Data

