
  #1  
Old 3rd November 2016, 11:45 AM
Senior Member
 
Join Date: Aug 2012
Hadoop MapReduce

Discuss Hadoop MapReduce here. Welcome to MBA.ind.in; this page is for Hadoop MapReduce discussion. If you are looking for information on Hadoop MapReduce, ask your question in as much detail as possible in the “Quick Reply” box provided below. The more detailed your question is, the easier it will be for our experts to answer your query. And if you have any information on Hadoop MapReduce, please share your knowledge with our viewers in the “Quick Reply” box. Your knowledge can help many people. Thanks for stopping by at MBA.ind.in. Please visit again.
  #2  
Old 10th March 2018, 02:15 PM
Unregistered
Guest
 
Re: Hadoop MapReduce

Can you tell me whether Tutorials Point offers a tutorial for Hadoop MapReduce? Please provide me the details of the Hadoop MapReduce tutorial offered by Tutorials Point.
  #3  
Old 10th March 2018, 02:18 PM
Super Moderator
 
Join Date: Mar 2013
Re: Hadoop MapReduce

Yes, Tutorials Point offers a tutorial for Hadoop MapReduce. Hadoop MapReduce is a software framework for easily writing applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

The Algorithm
Generally, the MapReduce paradigm is based on sending the computation to where the data resides.
A MapReduce program executes in three stages, namely the map stage, the shuffle stage, and the reduce stage.
Map stage: The map or mapper's job is to process the input data. Generally the input data is in the form of a file or directory and is kept in the Hadoop Distributed File System (HDFS). The input file is passed to the mapper function line by line. The mapper processes the data and creates several small chunks of data.
Reduce stage: This stage is the combination of the shuffle stage and the reduce stage. The reducer's job is to process the data that comes from the mapper. After processing, it produces a new set of output, which is stored in HDFS.
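
To make the two stages concrete, here is a rough word-count sketch using Hadoop's Java MapReduce API. The class names TokenizerMapper and IntSumReducer and the whitespace tokenizer are illustrative assumptions, not part of the Tutorials Point material. The mapper emits a (word, 1) pair for every token in its input line; the framework shuffles all pairs with the same word to one reducer call, which sums the counts.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map stage: called once per input line; the key is the byte offset, the value is the line text.
class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // emit (word, 1) for each token
            }
        }
    }
}

// Reduce stage: receives one key plus all the values shuffled to it, and sums them.
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));   // final (word, count) pair written to HDFS
    }
}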

During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers in the cluster.
The framework manages all the details of data-passing such as issuing tasks, verifying task completion, and copying data around the cluster between the nodes.
Most of the computing takes place on the nodes where the data resides on local disks, which reduces network traffic.
After completion of the given tasks, the cluster collects and reduces the data to form an appropriate result, and sends it back to the Hadoop server.
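
As a rough sketch of how such a job is handed to the cluster, a small driver class configures the mapper and reducer shown above and points the job at HDFS input and output paths. The class name WordCountDriver and the argument layout are assumptions for illustration, not part of the Tutorials Point material.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // optional local pre-aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // input file or directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory in HDFS (must not already exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a JAR, the job would typically be submitted with something like "hadoop jar wordcount.jar WordCountDriver /input /output"; Hadoop then schedules the map and reduce tasks on the cluster nodes as described above.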

Tutorials Point - Tutorials for Hadoop MapReduce



