Summary: Parallel Computing, also referred to as Parallel Processing, is a computing architecture that uses several processors to handle separate parts of an overall task.
Parallel Computing, also referred to as Parallel Processing, is a computing architecture that uses several processors to handle separate parts of an overall task. The architecture divides complex problems into small computations and distributes the workload among multiple processors working simultaneously. Parallel computing principles are employed in most supercomputers to tackle humanity’s most pressing problems.
WHAT IS PARALLEL COMPUTING AND HOW DOES IT WORK?
Parallel Computing employs multiple processors to work on a problem simultaneously. The architecture divides complex problems into smaller calculations that are solved at the same time using several computing resources. Data scientists frequently employ parallel computing in situations that require substantial computing power and processing speed, since it increases the available computing power and improves both performance and time to solution.
Old-school serial computing (sequential processing) completes one task at a time on a single processor. Parallel computing, by contrast, executes several operations at once: it decomposes a task into parts and works on them simultaneously. Parallel computer systems are particularly good at modeling and simulating real-world events.
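To make the contrast concrete, here is a minimal sketch in Python that runs the same workload first serially and then in parallel with the standard multiprocessing module. The simulate_cell function and the eight-cell “grid” are hypothetical stand-ins for a real modeling task, not part of the original article.

```python
# Minimal sketch: the same workload run serially and then in parallel.
# simulate_cell and the 8-cell "grid" are hypothetical stand-ins for a real model.
import time
from multiprocessing import Pool

def simulate_cell(cell_id):
    """Pretend to model one cell of a simulation (CPU-bound busy work)."""
    total = 0
    for i in range(2_000_000):
        total += (cell_id * i) % 7
    return total

if __name__ == "__main__":
    cells = list(range(8))

    start = time.perf_counter()
    serial_results = [simulate_cell(c) for c in cells]   # one processor, one task at a time
    print(f"serial:   {time.perf_counter() - start:.2f} s")

    start = time.perf_counter()
    with Pool(processes=4) as pool:                       # decompose the task across 4 workers
        parallel_results = pool.map(simulate_cell, cells)
    print(f"parallel: {time.perf_counter() - start:.2f} s")

    assert serial_results == parallel_results             # same answer, less wall-clock time
```

On a machine with at least four cores, the parallel run finishes in roughly a quarter of the serial time for this kind of independent, CPU-bound work.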
AMDAHL’S LAW AND GUSTAFSON’S LAW
In an ideal parallel computing environment, the speedup from parallelization would be perfectly linear: doubling the number of processing elements would halve the runtime, and doubling it a second time would halve the runtime again. However, very few parallel algorithms achieve optimal speedup. Most algorithms show near-linear speedup only for small numbers of processing elements. Amdahl’s Law is used to estimate the potential speedup of a program on a parallel computing platform.
Amdahl’s formula only applies when the problem size is constant. In practice, as more computing resources become available, they are used on larger datasets, and the time spent in the parallelizable portion grows accordingly. Gustafson’s Law therefore gives a more realistic assessment of parallel performance when the problem size scales with the number of processors.
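As a rough illustration, both laws can be written as one-line functions: Amdahl’s Law gives speedup S = 1 / ((1 − p) + p/n) for a fixed-size problem in which a fraction p is parallelizable, while Gustafson’s Law gives scaled speedup S = (1 − p) + p·n when the problem grows with the machine. The sketch below uses p = 0.95 purely as an assumed example value.

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: fixed problem size, fraction p is parallelizable, n processors."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's Law: problem size scales with n; p is the parallel fraction of the scaled run."""
    return (1.0 - p) + p * n

# Example: a workload that is 95% parallelizable (an assumed figure).
for n in (2, 8, 64, 1024):
    print(f"n={n:5d}  Amdahl: {amdahl_speedup(0.95, n):7.2f}x   "
          f"Gustafson: {gustafson_speedup(0.95, n):8.2f}x")

# Amdahl's curve flattens toward 1/(1-p) = 20x no matter how many processors are added,
# while Gustafson's scaled speedup keeps growing roughly linearly with n.
```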
PARALLEL COMPUTING IN DATA MINING AND ANALYSIS
There are two types of data-mining applications: those intended to explain the most variable parts of the data set, and those intended to understand the majority of the data set’s variation without regard for the outliers. Scientific data mining falls under the first type (for example, finding an unexpected stellar object in the sky), while commercial applications fall under the second (for example, understanding customers’ buying habits). Parallel computing is essential for the first type of mining application and is considered a critical approach for data mining overall.
Artificial neural networks, modeled on biological neural systems, are non-linear predictive models. Data mining and analysis increasingly rely on parallel programming models such as MapReduce, a batch-oriented parallel computing technique, although a significant performance gap remains between MapReduce and relational databases. Many machine learning and data mining algorithms have been implemented with MapReduce-style parallel programming to improve the real-time performance of large-scale data processing, since these algorithms frequently have to scan through the training data to collect the statistics used to fit or improve the model.
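As a toy illustration of the MapReduce pattern, the sketch below counts words in a handful of documents with Python’s built-in multiprocessing: the map step runs in parallel across workers, and the reduce step merges the partial counts. The document list is made up for the example; real MapReduce frameworks such as Hadoop add distribution across machines, fault tolerance, and disk-based shuffling on top of this basic idea.

```python
# Toy MapReduce-style word count; the input documents are made-up examples.
from collections import Counter
from multiprocessing import Pool

def map_phase(document):
    """Map: emit a partial word count for one document."""
    return Counter(document.lower().split())

def reduce_phase(partial_counts):
    """Reduce: merge the partial counts into a single result."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

if __name__ == "__main__":
    documents = [
        "parallel computing splits work across processors",
        "mapreduce is a batch oriented parallel computing technique",
        "data mining algorithms scan the training data in parallel",
    ]
    with Pool(processes=3) as pool:
        partials = pool.map(map_phase, documents)   # map step runs in parallel
    word_counts = reduce_phase(partials)            # reduce step merges results
    print(word_counts.most_common(5))
```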
OTHER PARALLEL COMPUTING EXAMPLES & APPLICATIONS
Parallel computing is now used in a wide range of applications, from computational astrophysics and geoprocessing to seismic surveying, climate modeling, agricultural estimates, financial risk management, video color correction, computational fluid dynamics, medical imaging, and drug discovery.
MEDICINE & DRUG DISCOVERY
Emerging technologies are changing the medical landscape in numerous ways. Parallel computing has long been a factor in this area, and it appears poised to drive even more discoveries. Radiologists can readily employ AI technology, which helps imaging systems cope with growing data volumes and computational burdens. GPU-based multi-processing systems enable physicians to produce imaging models with ten times less data than previously required.
Simulations of molecular dynamics are highly beneficial in drug discovery, and parallel programming is an excellent fit for them. Advanced parallel computing also allows for the detailed examination of molecular machinery, which could lead to significant applications in genetic disease research. Parallel processing’s data-analytics strength holds tremendous potential for public health, alongside graphics rendering and pharmaceutical research.
RESEARCH & ENERGY
Parallel computing is also used in various fields of scientific research, including astrophysics simulations, seismic surveying, quantum chromodynamics, and more. For example, a parallel supercomputer recently enabled a significant advance in black hole research: scientists solved a four-decade-old mystery about how matter trapped by a black hole swirls around it and eventually collapses into it. This breakthrough is critical to scientists’ attempts to understand how this enigmatic phenomenon works.
COMMERCIAL
Parallel computing is typically associated with academic and government research, but businesses have taken notice as well. Banks, investment firms, and cryptocurrency traders use GPU-powered parallel computing to get results faster. Parallel computing has a long history in the entertainment world, and it is also beneficial to sectors that rely on computational fluid dynamics. GPU-accelerated technologies are used across nearly every major component of today’s banking, from credit scoring to risk modeling to fraud detection.
It’s not uncommon for a bank to run tens of thousands of the most powerful GPUs these days. JPMorgan Chase was one of the first adopters, announcing in 2011 that switching from CPU-only to hybrid GPU-CPU processing had improved its risk calculations by 40 percent. Wells Fargo has been using Nvidia GPUs for various tasks, including accelerating AI models for liquidity risk and virtualizing its desktop infrastructure. GPUs are also at the heart of the crypto-mining boom.
DIFFERENCE BETWEEN PARALLEL, DISTRIBUTED, AND GRID COMPUTING
Parallel, distributed, and grid computing all aim to speed up computation by performing several independent computational tasks simultaneously; the difference lies in whether the work runs on multiple units or processors within one system, or on numerous computers spread across large geographical areas and connected over a network (distributed and grid computing). All three architectures rely on multiple computing resources to solve complex problems.
In parallel computing, multiple processors execute numerous tasks at the same time, and the system’s memory may be shared or distributed. This concurrency saves organizations both time and money. In distributed computing, numerous autonomous computers work together and appear as a single system; the computers communicate through message passing rather than shared memory, and a single task is split across many machines.
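The contrast between shared memory and message passing can be sketched with Python’s multiprocessing module: a Value object stands in for memory that workers share, while a Queue carries explicit messages between otherwise independent processes. This is only an illustration on a single machine; real distributed systems pass messages over a network, and the worker functions below are hypothetical examples.

```python
# Sketch: shared memory (parallel style) vs. message passing (distributed style),
# both shown on one machine with Python's multiprocessing module.
from multiprocessing import Process, Queue, Value

def add_shared(counter, amount):
    """Shared-memory style: workers update one common counter."""
    with counter.get_lock():
        counter.value += amount

def add_via_messages(queue, amount):
    """Message-passing style: workers send results; no memory is shared."""
    queue.put(amount)

if __name__ == "__main__":
    # Shared memory: both processes write to the same Value.
    counter = Value("i", 0)
    workers = [Process(target=add_shared, args=(counter, n)) for n in (10, 32)]
    for w in workers: w.start()
    for w in workers: w.join()
    print("shared-memory total:", counter.value)            # 42

    # Message passing: the parent gathers the messages and combines them itself.
    queue = Queue()
    workers = [Process(target=add_via_messages, args=(queue, n)) for n in (10, 32)]
    for w in workers: w.start()
    total = sum(queue.get() for _ in range(2))               # receive one message per worker
    for w in workers: w.join()
    print("message-passing total:", total)                   # 42
```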
The main difference between distributed and grid computing is that in distributed computing a centralized manager handles the resources, whereas in grid computing each node has its own resource manager and the system does not function as a single unit.
Source: dataconomy.com