What do you understand by MapReduce? Explain how MapReduce works.

 MAP-REDUCE PROGRAMMING

  • MapReduce is a common programming approach for developing data-intensive applications and deploying them on clouds. MapReduce is a software framework that allows developers to quickly write programs that process massive volumes of data (multi-terabyte datasets) in parallel across enormous clusters (thousands of nodes) in a fault-tolerant way. A MapReduce job typically divides the incoming data set into distinct pieces that are processed in parallel by the map tasks. The framework sorts the map outputs, which are subsequently fed into the reduce tasks. Typically, both the job's input and output are saved in a file system. The framework takes care of task scheduling, task monitoring, and re-execution of failed tasks.
  • MapReduce is a programming model developed by Google for processing huge amounts of data; it expresses an application's computational logic in two basic functions: map and reduce. The distributed storage infrastructure, such as the Google File System, is in charge of giving access to data, duplicating files, and relocating them where needed. Application developers therefore work with an interface that presents data at a higher level: as a collection of key-value pairs. The processing of a MapReduce application is thus arranged into a workflow of map and reduce operations that is completely controlled by the runtime system; developers need only specify how the map and reduce functions act on the key-value pairs (a minimal sketch of these two functions follows below).
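
  • As a rough, hypothetical illustration (not the actual Google or Hadoop API), the two user-supplied functions for one of the classic examples, counting URL-access frequency from web-server logs, might look like the Python sketch below. The names map_fn and reduce_fn and the log-line format are assumptions made only for this example; the runtime system would call map_fn on every input record, group the intermediate pairs by key, and pass each group to reduce_fn.

      # Hypothetical sketch of the two user-supplied MapReduce functions.
      # The function names are illustrative and not part of any real API.
      from typing import Iterable, Iterator, Tuple

      def map_fn(key: str, value: str) -> Iterator[Tuple[str, int]]:
          # (k1, v1) -> intermediate (k2, v2) pairs.
          # Assume each input value is one web-server log line whose first
          # field is the requested URL; emit (URL, 1) for each access.
          url = value.split()[0]
          yield url, 1

      def reduce_fn(key: str, values: Iterable[int]) -> Tuple[str, int]:
          # (k2, list of v2) -> aggregated output: total accesses per URL.
          return key, sum(values)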

Working of MapReduce

  • MapReduce performs two critical functions: it filters and distributes the work to the various nodes within the cluster, a step known as the map and performed by the mapper, and it organizes and reduces the results from each node into a coherent answer to the query, a step known as the reduce and performed by the reducer.


  • MapReduce runs in parallel over huge clusters to distribute the input data and collate the results. Because cluster size does not affect the final output of a processing operation, workloads can be spread across practically any number of computers. As a result, MapReduce makes software development easier. MapReduce is accessible in a variety of programming languages, including C, C++, Java, Ruby, Perl, and Python, and MapReduce libraries allow programmers to build jobs without having to worry about communication or coordination between nodes.
  • MapReduce is also fault-tolerant: each node regularly reports its status to a master node. If a node fails to respond as expected, the master node reassigns that portion of the work to other nodes in the cluster that are available (see the simplified sketch after this bullet). This resilience makes it viable for MapReduce to run on affordable commodity servers.
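
  • The sketch below is a toy, single-machine simulation, written in Python, of how a master might farm map tasks out to workers in parallel and re-run any task that fails; a real framework such as Hadoop does this across separate machines, and the helper names here (run_with_retry, map_task) are assumptions made purely for the illustration.

      # Toy simulation of parallel map tasks with retry on failure.
      from concurrent.futures import ThreadPoolExecutor

      def run_with_retry(task, data, retries=3):
          # Re-run a failed task, mimicking the master reassigning work.
          for attempt in range(retries):
              try:
                  return task(data)
              except Exception:
                  if attempt == retries - 1:
                      raise

      def map_task(chunk):
          # Process one input split: emit (word, 1) for every word.
          return [(word, 1) for word in chunk.split()]

      chunks = ["to be or not to be", "that is the question"]
      with ThreadPoolExecutor(max_workers=4) as pool:
          results = list(pool.map(lambda c: run_with_retry(map_task, c), chunks))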


  • MapReduce's strength lies in its ability to handle large data sets by distributing processing over many nodes and then combining or reducing the outputs of those nodes. A user could, for example, run a single-server application to list and tally the number of times each word appears in a novel, but this is time-consuming. Alternatively, the user can divide the workload among 26 people, so that each person takes a page, writes each word on a separate slip of paper, and then takes a new page when they are finished. This is MapReduce's map component. And if someone leaves, someone else takes his or her place; this highlights the fault-tolerant nature of MapReduce. When all of the pages have been processed, the slips are sorted into 26 boxes, one for each letter of the alphabet. Each person then takes a box and alphabetically organizes the words in the stack. Counting the number of slips that carry the same word is MapReduce's reduce component. (A compact sketch of this word-count job appears below.)
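
  • The following self-contained Python sketch mirrors that analogy as a word-count job: map emits (word, 1) for every word on a page, a shuffle step groups the pairs by word, and reduce counts how many times each word occurred. The sequential driver is a simplification assumed for this example; a real MapReduce runtime executes the map and reduce calls in parallel across the cluster.

      # Minimal word-count job: map, shuffle (group by key), reduce.
      from collections import defaultdict

      def map_fn(page_id, text):
          # Emit an intermediate (word, 1) pair for every word on the page.
          for word in text.lower().split():
              yield word, 1

      def reduce_fn(word, counts):
          # Sum the counts collected for one word.
          return word, sum(counts)

      def run_job(pages):
          groups = defaultdict(list)          # shuffle: collect values by key
          for page_id, text in pages.items():
              for word, count in map_fn(page_id, text):
                  groups[word].append(count)
          return dict(reduce_fn(w, counts) for w, counts in sorted(groups.items()))

      pages = {1: "it was the best of times", 2: "it was the worst of times"}
      print(run_job(pages))   # {'best': 1, 'it': 2, 'of': 2, 'the': 2, ...}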

  • MapReduce might be used by a social networking site to assess users' possible friends, coworkers, and other contacts based on on-site activity, names, localities, employers, and a variety of other data points. A booking website may use MapReduce to analyze customers' search criteria and past activity, and then produce personalized options for each customer.
  • The computation paradigm described by MapReduce is relatively simple, allowing for improved efficiency for the developers who write methods for processing massive amounts of data. In the case of Google, where the majority of the information to be processed is kept in textual form and is represented by Web pages or log files, this strategy has proven to be successful. Distributed grep, count of URL-access frequency, reverse web-link graph, term vector per host, inverted index, and distributed sort are some of the examples that show the flexibility of MapReduce. These examples are mostly focused on text-based processing. With some modifications, MapReduce may also be used to address a broader range of problems. An exciting use is in the field of machine learning, where statistical algorithms such as Support Vector Machines (SVM), Linear Regression (LR), Neural Networks (NN), etc. are expressed as map and reduce functions. Other intriguing applications can be found in the realm of compute-intensive applications, such as the high-precision calculation of Pi. (A sketch of one of the text-processing examples, the inverted index, is given below.)
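
  • As a rough illustration of one of those examples, the inverted index can be expressed with a map function that emits (word, document_id) pairs and a reduce function that collects, for each word, the sorted list of documents containing it. The Python sketch below uses an in-memory, sequential driver, and the document identifiers and function names are assumptions made only for this example.

      # Inverted index expressed as map and reduce functions.
      from collections import defaultdict

      def map_fn(doc_id, text):
          # Emit (word, doc_id) once for every distinct word in the document.
          for word in set(text.lower().split()):
              yield word, doc_id

      def reduce_fn(word, doc_ids):
          # Collect the sorted, de-duplicated list of documents per word.
          return word, sorted(set(doc_ids))

      docs = {"d1": "big data needs big clusters", "d2": "map and reduce data"}
      groups = defaultdict(list)              # shuffle: group doc ids by word
      for doc_id, text in docs.items():
          for word, d in map_fn(doc_id, text):
              groups[word].append(d)
      index = dict(reduce_fn(w, ids) for w, ids in groups.items())
      print(index)   # e.g., {'big': ['d1'], 'data': ['d1', 'd2'], ...}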
