Explain parallel computing with its advantages & disadvantages.

PARALLEL COMPUTING

  • Parallel computing is a type of computer architecture in which many processors execute an application or computation simultaneously. It aids in performing large calculations by splitting the workload across several processors, all of which work on the computation at the same time. Most supercomputers use parallel computing methods. Parallel computing is also known as parallel processing.
  • Parallel processing is typically used in environments/scenarios that need large computing or processing capability. Its primary goal is to increase the available computing power so that applications run faster and jobs are resolved sooner. Parallel computing infrastructure is often hosted in a single facility, where multiple processors are deployed in a server rack or independent servers are linked together.
  • The application server sends a calculation or processing request that is broken down into small pieces or components, which are processed concurrently on each processor/server. Parallel processing can be divided into bit-level, instruction-level, data-level, and task-level parallelism.
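The last two levels can be sketched in a few lines using Python's standard concurrent.futures module. This is an illustrative sketch only: the example tasks (square, count_evens) are hypothetical workloads chosen to make the contrast visible, not part of any particular system.

```python
# Sketch contrasting data-level and task-level parallelism using the
# standard-library concurrent.futures module (hypothetical example tasks).
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

def count_evens(nums):
    return sum(1 for n in nums if n % 2 == 0)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Data-level parallelism: the same operation applied to different data.
        squares = list(pool.map(square, [1, 2, 3, 4]))
        # Task-level parallelism: different operations running concurrently.
        f1 = pool.submit(square, 10)
        f2 = pool.submit(count_evens, range(10))
        print(squares, f1.result(), f2.result())
```

Bit-level and instruction-level parallelism, by contrast, happen inside the processor hardware (wider words, pipelining, superscalar execution) and are not expressed in application code.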

 

Parallel computing, in its most basic form, is the simultaneous use of multiple compute resources to solve a computational problem:

  • A problem is broken into discrete parts that can be solved concurrently 
  • Each part is further broken down into a series of instructions
  • Instructions from each part execute simultaneously on different processors
  • An overall control/coordination mechanism is employed
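A minimal sketch of these four steps, assuming Python's standard multiprocessing module; the summing problem and the names part_sum/parallel_sum are illustrative choices, not a prescribed method:

```python
# Sketch of the decomposition above: split a problem into parts,
# solve the parts concurrently, then coordinate the results.
from multiprocessing import Pool

def part_sum(chunk):
    # Each worker runs its series of instructions on one discrete part.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # A problem is broken into discrete parts that can be solved concurrently.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Instructions from each part execute simultaneously on different processors.
    with Pool(workers) as pool:
        partials = pool.map(part_sum, chunks)
    # An overall coordination mechanism combines the partial results.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

The same structure (partition, map, combine) appears in most parallel frameworks, whatever the underlying hardware.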

The computational problem should be able to:

  • Be broken apart into discrete pieces of work that can be solved simultaneously.
  • Execute multiple program instructions at any moment in time. 
  • Be solved in less time with multiple compute resources than with a single compute resource.

The compute resources are typically:

  • A single computer with multiple processors/cores
  • An arbitrary number of such computers connected by a network

 

Advantages

  • Parallel computing saves time by allowing programs to execute in less time.
  • Larger problems can be solved in a shorter period.
  • Parallel computing is far better suited than serial computing to modeling, simulating, and understanding complex real-world phenomena.
  • Many real-world problems are so large and/or complex that solving them on a single computer is impractical or impossible, especially given its limited memory and compute capacity.
  • Many tasks can be performed at the same time when parallel computing resources are used.
  • It can store large amounts of data and perform calculations on that data quickly.

 

Disadvantages

  • Programming for parallel architectures is more challenging and requires thorough study and practice. Parallel computing lets you tackle compute- and data-intensive problems using multicore processors, but the control algorithms may not always produce good results, which can affect the system's convergence.
  • Extra execution time is incurred by data transfers, synchronization, communication, thread creation/destruction, and so on. These costs can be substantial and may even outweigh the benefits of parallelization.
  • To maximize speed, the code must be tuned differently for each target architecture.
  • In the case of clusters, better cooling systems are necessary.
  • Multi-core designs consume a lot of power.
  • Parallel solutions are harder to design, debug, and prove correct, and they can perform worse than serial versions because of communication and coordination overhead.
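The overhead point can be made concrete with a small sketch (the tiny_task workload is an assumed example): when the work per item is very small, process creation and data transfer can cost more than the computation itself, so the parallel version may actually run slower than the serial one.

```python
# Sketch showing parallel overhead dominating trivial work
# (standard-library multiprocessing; illustrative workload only).
from multiprocessing import Pool
import time

def tiny_task(x):
    # The work per item is deliberately trivial.
    return x + 1

def serial(data):
    return [tiny_task(x) for x in data]

def parallel(data):
    # Process creation and pickling of the data are pure overhead here.
    with Pool(4) as pool:
        return pool.map(tiny_task, data)

if __name__ == "__main__":
    data = list(range(10_000))
    t0 = time.perf_counter(); serial(data);   t1 = time.perf_counter()
    parallel(data);                           t2 = time.perf_counter()
    # On many machines the serial version wins here, because the per-item
    # work is far smaller than the process-management and transfer costs.
    print(f"serial {t1 - t0:.4f}s, parallel {t2 - t1:.4f}s")
```

This is why granularity matters: parallelization pays off only when each part does enough work to amortize the coordination costs.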

