MapReduce Frameworks Lab
Motivation
Dealing with big data is Hansel-level hot right now. We will build two implementations to better understand the inner workings of Hadoop and Spark-like frameworks. Your frameworks will actually support more than just MapReduce.
The matrix implementation gives us experience with dividing the work into thread-confined tasks whose results are then combined.
Background
Wikipedia article on MapReduce
As the name suggests, MapReduce refers to the process of mapping and then reducing some stream of data. At its core, a MapReduce algorithm transforms a list of items of one kind, then collects those transformed items and reduces them down to a single value using some computation. As you can probably tell, the concept of MapReduce is extremely general and applies to a wide variety of problems. In this assignment, we will use MapReduce to simulate the Facebook mutual friends algorithm (finding the mutual friends between two people) and a word count algorithm. In the studios, you will use MapReduce to find which infected well(s) caused an outbreak of cholera in historic London and to map a deck of cards.
For more information on the general concept of MapReduce, refer to this article.
Java Advice
Parameterized Type Array Tip
Creating arrays of parameterized types in Java is madness-inducing. Some details are available in Java Generics Restrictions.
The example below creates an array of List<String>. The @SuppressWarnings annotation is optional.
@SuppressWarnings("unchecked") List<String>[] result = new List[length];
Use map.entrySet()
Prefer the use of map.entrySet() over map.keySet() followed by looking up the value with map.get(key).
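For example, given some Map<String, Integer> of word counts (wordCounts is a hypothetical variable here), iterating over entrySet() visits each mapping once, whereas the keySet() pattern pays for an extra lookup per key:

// preferred: each entry is visited exactly once
for (Map.Entry<String, Integer> entry : wordCounts.entrySet()) {
    System.out.println(entry.getKey() + ": " + entry.getValue());
}

// avoid: every iteration performs a second hash lookup via get(key)
for (String key : wordCounts.keySet()) {
    System.out.println(key + ": " + wordCounts.get(key));
}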
Use the appropriate version of forall
There are many overloaded versions of forall. Choose the correct one for each situation, where "correct" is often the one that produces the cleanest code.
Mistakes To Avoid
Warning: Arrays (and matrices) are initially filled with null. You must fill them with instances.
Warning: Ensure that your Slices studio conforms to the spec.
Warning: Do NOT iterate over all of the original unsliced data from slice.getOriginalUnslicedData(). Use getMinInclusive() and getMaxExclusive() when processing your slice.
Code To Use
To allow our frameworks to work well with JDK 8 Streams, we employ a couple of standard interfaces rather than creating our own custom ones.
BiConsumer
We use the standard BiConsumer<T,U> interface and its accept(t,u) method in place of a mythical, custom MapContext<K,V> interface with an emit(key,value) method.
public interface BiConsumer<T, U> {
    // invoke accept with each key and value you wish to emit
    void accept(T t, U u);
}
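For example, a mapper for word count might emit a count of 1 for each word it sees by calling accept. This is only an illustrative sketch that maps a plain String line; the warm up's real mapper takes a TextSection.

// sketch: emit (word, 1) for every word in a line of text
static void map(String line, BiConsumer<String, Integer> keyValuePairConsumer) {
    for (String word : line.split("\\s+")) {
        keyValuePairConsumer.accept(word, 1); // this is the "emit"
    }
}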
Collector
Read the Javadoc for Collector.
Check out the Collector Studio background (Collector_MapReduce_Studio#Background).
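The sketch below illustrates the four pieces of a java.util.stream.Collector that this lab relies on, driven sequentially through a small helper method; it uses only the standard Collector API.

// a minimal illustration of Collector's parts: supplier, accumulator, combiner, finisher
static <T, A, R> R collectSequentially(Iterable<T> items, Collector<T, A, R> collector) {
    A container = collector.supplier().get();            // create the mutable container
    for (T item : items) {
        collector.accumulator().accept(container, item);  // fold each item into it
    }
    // collector.combiner() merges two containers built by separate tasks; unused here
    return collector.finisher().apply(container);         // convert to the final result
}

// usage: List<String> letters = collectSequentially(Arrays.asList("a", "b"), Collectors.toList());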
A Path To Victory
- Int_Sum_MapReduce_Apps_Studio
- Collector_MapReduce_Studio
- Mutual_Friends_MapReduce_Application
- #Optional_Warm_Up
- #Simple_MapReduce_Framework
- Cholera_MapReduce_Application
- #Matrix_MapReduce_Framework
Optional Warm Up
WordCountConcreteStaticMapReduce
Mapper
method: static void map(TextSection textSection, BiConsumer<String, Integer> keyValuePairConsumer)
(sequential implementation only)
Reducer
method: static List<Integer> reduceCreateList()
(sequential implementation only)
method: static void reduceAccumulate(List<Integer> list, int v)
(sequential implementation only)
method: static void reduceCombine(List<Integer> a, List<Integer> b)
(sequential implementation only)
method: static int reduceFinish(List<Integer> list)
(sequential implementation only)
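One plausible way these reducer methods could fit together for word count (a sketch, not necessarily the graded solution): each key's counts are gathered into a list, lists built by separate tasks are concatenated, and the finish step sums the list.

static List<Integer> reduceCreateList() {
    return new ArrayList<>(); // fresh, empty container for one key
}

static void reduceAccumulate(List<Integer> list, int v) {
    list.add(v); // gather another emitted count for this key
}

static void reduceCombine(List<Integer> a, List<Integer> b) {
    a.addAll(b); // fold b's counts into a
}

static int reduceFinish(List<Integer> list) {
    int sum = 0;
    for (int count : list) {
        sum += count; // the word's total is the sum of its emitted counts
    }
    return sum;
}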
Framework
method: static List<KeyValuePair<String, Integer>>[] mapAll(TextSection[] input)
(parallel implementation required)
method: static Map<String, List<Integer>> accumulateAll(List<KeyValuePair<String, Integer>>[] mapAllResults)
(sequential implementation only)
method: static Map<String, Integer> finishAll(Map<String, List<Integer>> accumulateAllResult)
(parallel implementation required)
Testing Your Solution
Correctness
class: WarmUpWordCountMapReduceTestSuite.java
package: mapreduce
source folder: testing/src/test/java
MutualFriendsConcreteStaticMapReduce
Mapper
method: static void map(Account account, BiConsumer<OrderedPair<AccountId>, Set<AccountId>> keyValuePairConsumer)
(sequential implementation only)
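A sketch of the classic mutual-friends mapping, assuming (hypothetically) that Account exposes getId() and getFriendIds() and that OrderedPair normalizes the order of its two ids: for each friend, emit the pair of ids as the key and this account's whole friend set as the value.

static void map(Account account, BiConsumer<OrderedPair<AccountId>, Set<AccountId>> keyValuePairConsumer) {
    AccountId selfId = account.getId();                // assumed accessor
    Set<AccountId> friendIds = account.getFriendIds(); // assumed accessor
    for (AccountId friendId : friendIds) {
        // the reducer later intersects the friend sets emitted under the same pair
        keyValuePairConsumer.accept(new OrderedPair<>(selfId, friendId), friendIds);
    }
}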
Reducer
method: static List<Set<AccountId>> reduceCreateList()
(sequential implementation only)
method: static void reduceAccumulate(List<Set<AccountId>> list, Set<AccountId> v)
(sequential implementation only)
method: static void reduceCombine(List<Set<AccountId>> a, List<Set<AccountId>> b)
(sequential implementation only)
method: static MutualFriendIds reduceFinish(List<Set<AccountId>> list)
(sequential implementation only)
Framework
method: static List<KeyValuePair<OrderedPair<AccountId>, Set<AccountId>>>[] mapAll(Account[] input)
(parallel implementation required)
method: static Map<OrderedPair<AccountId>, List<Set<AccountId>>> accumulateAll(List<KeyValuePair<OrderedPair<AccountId>, Set<AccountId>>>[] mapAllResults)
(sequential implementation only)
method: static Map<OrderedPair<AccountId>, MutualFriendIds> finishAll(Map<OrderedPair<AccountId>, List<Set<AccountId>>> accumulateAllResult)
(parallel implementation required)
Testing Your Solution
Correctness
class: WarmUpMutualFriendsMapReduceTestSuite.java
package: mapreduce
source folder: testing/src/test/java
Required Lab
Simple MapReduce Framework
class: SimpleMapReduceFramework.java
methods: mapAll, accumulateAll, finishAll
package: mapreduce.framework.lab.simple
source folder: student/src/main/java
Navigate to the SimpleMapReduceFramework.java class; there you will find three methods to complete: mapAll, accumulateAll, and finishAll. These frameworks are meant to be extremely general so that they can be applied to more specific uses of MapReduce. Whereas the warm ups for this lab prepare you to build this required section, the simple framework is in many ways itself a warm up for the matrix implementation.
mapAll
method: List<KeyValuePair<K, V>>[] mapAll(E[] input)
(parallel implementation required)
With this method, you will map all of the elements of an array of data into a new array of the same size, where each entry is a list of key/value pairs. To do this, invoke the mapper's map() method on each element, supplying a BiConsumer that adds each emitted key/value pair to that element's previously empty list. Each completed list then goes into its slot in the array of lists, completing the mapping stage of MapReduce. This should all be done in parallel.
Hint: you should create an array of lists equivalent in size to the original array. Each list will contain all of the emitted (key,value) pairs for its item.
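A minimal sketch of the shape of this method body, assuming the framework stores its Mapper in a field named mapper and that KeyValuePair has a two-argument constructor; a parallel IntStream stands in for the course's parallel constructs.

@SuppressWarnings("unchecked")
List<KeyValuePair<K, V>>[] result = new List[input.length];
// one task per element; each task writes only its own index (thread confinement)
IntStream.range(0, input.length).parallel().forEach(i -> {
    List<KeyValuePair<K, V>> keyValuePairs = new LinkedList<>();
    // the mapper "emits" by invoking this BiConsumer; append each pair to this element's list
    mapper.map(input[i], (k, v) -> keyValuePairs.add(new KeyValuePair<>(k, v)));
    result[i] = keyValuePairs;
});
return result;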
accumulateAll
method: Map<K, A> accumulateAll(List<KeyValuePair<K, V>>[] mapAllResults)
(sequential implementation only)
This middle step is often excluded in more advanced MapReduce applications. It is the only step of this framework that must be completed sequentially. In the matrix framework implementation, we will do away with this step altogether for the sake of performance.
In this method, you will take in the array of lists you previously created and accumulate the key/value pairs in those lists into a newly created map. To do this, you must make use of the Collector provided to you. More specifically, get the accumulator by calling the collector's accumulator() method and use it to accept each key/value pair as you add it to the map. You probably noticed that the method must return a Map<K, A>, which differs from the <K, V> generics fed into the method. The framework is designed this way because the data originally fed into the mapping stage is collected into a mutable container before reaching the finish/reduce stage. To obtain the correct starting value for a key that has no associated container yet, use the supplier associated with the Collector via its supplier() method.
Hint: Look into the compute() method for maps.
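A minimal sketch of the method body, assuming the framework stores its Collector in a field named collector and that KeyValuePair exposes getKey()/getValue(); computeIfAbsent is used here as a close cousin of the compute method mentioned in the hint.

Map<K, A> result = new HashMap<>();
BiConsumer<A, V> accumulator = collector.accumulator();
for (List<KeyValuePair<K, V>> keyValuePairs : mapAllResults) {
    for (KeyValuePair<K, V> pair : keyValuePairs) {
        // lazily create the mutable container for this key via the supplier
        A container = result.computeIfAbsent(pair.getKey(), k -> collector.supplier().get());
        accumulator.accept(container, pair.getValue()); // fold this value into the container
    }
}
return result;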
finishAll
method: Map<K, R> finishAll(Map<K, A> accumulateAllResult)
(parallel implementation required)
This final step reduces the accumulated data and returns the final map in its reduced form. Again, you may notice that the method returns a map of <K, R> instead of the <K, A> which was returned in the accumulateAll method. This happens for the exact same reason as the accumulateAll method, as the framework is designed to handle cases in which the reduced data differs in type from the accumulated data.
To reduce the data down, use the map returned from the accumulateAll stage and put the results of the reduction into a new map. The provided Collector will come in extremely handy for this stage, more specifically its finisher, which you can obtain by calling the finisher() method. This step should run in parallel and will probably be the easiest of the three methods.
NOTE: It is acceptable to build a contrived implementation of finishAll() with a ConcurrentHashMap. While all of the parallelism is nullified by the bottlenecks the ConcurrentHashMap introduces to keep itself thread-safe, we have deemed it better to move on to the matrix implementation than to worry about the hoops you would need to jump through to do this step truly in parallel. Reach out to an instructor for details on how to do it without the ConcurrentHashMap bottleneck, if you are interested.
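A minimal sketch of the contrived ConcurrentHashMap approach described in the note, again assuming a collector field; a parallel stream stands in for the course's parallel constructs.

Map<K, R> result = new ConcurrentHashMap<>();
Function<A, R> finisher = collector.finisher();
// finish every accumulated container in parallel; ConcurrentHashMap absorbs the concurrent puts
accumulateAllResult.entrySet().parallelStream().forEach(entry ->
        result.put(entry.getKey(), finisher.apply(entry.getValue())));
return result;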
Matrix MapReduce Framework
class: MatrixMapReduceFramework.java
methods: mapAndAccumulateAll, combineAndFinishAll
package: mapreduce.framework.lab.matrix
source folder: student/src/main/java
Navigate to the MatrixMapReduceFramework.java class; there you will find two methods to complete: mapAndAccumulateAll and combineAndFinishAll. These frameworks are meant to be extremely general so that they can be applied to more specific uses of MapReduce.
The matrix framework is much more complex than the simple framework, but it boosts performance by fusing the map and accumulate stages so that everything can run in parallel. It does so by slicing the given data into mapTaskCount slices and assigning a reduce task number to each emitted entry using the provided getReduceIndex() method. This, in effect, creates a matrix of maps, hence the name of the framework. In the combineAndFinishAll stage, the matrix comes in handy by allowing us to go directly down each column of the matrix (as each key is essentially grouped into a bucket), combining and reducing elements all in one pass. This concept was explained in more depth during class.
mapAndAccumulateAll
method: Map<K, A>[][] mapAndAccumulateAll(E[] input)
(parallel implementation required)
In this stage, you will map and accumulate a given array of data into a matrix of maps. This method should run in parallel while combining the map and accumulate portions of the simple framework (which we recommend you attempt first). As mentioned previously, the input should be sliced into mapTaskCount slices and then mapped/accumulated into its rightful spot in the matrix. Although you can slice up the data into chunks yourself, we recommend using the Slice and Slices classes introduced earlier in the course.
For each slice, the mapper should map each input element into its rightful spot in the matrix and accumulate it into that specific map. Essentially, you will need to nest the actions of the accumulate method inside the mapper. To find where an element belongs in the matrix, remember that each slice keeps track of its position relative to the other slices (giving the row) and that the getReduceIndex method, mentioned above, gives the column.
Hint: The number of rows should match the number of slices.
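A minimal sketch of the overall shape of this method, slicing the input by hand instead of with Slice/Slices; mapper, collector, mapTaskCount, reduceTaskCount, and a getReduceIndex(key) helper are assumed to be available on the framework, and a parallel IntStream stands in for the course's parallel constructs.

@SuppressWarnings("unchecked")
Map<K, A>[][] result = new Map[mapTaskCount][reduceTaskCount];
IntStream.range(0, mapTaskCount).parallel().forEach(mapTaskIndex -> {
    for (int r = 0; r < reduceTaskCount; r++) {
        result[mapTaskIndex][r] = new HashMap<>(); // fill the row; never leave nulls behind
    }
    // this slice's bounds, computed by hand (Slice's getMinInclusive()/getMaxExclusive() would do the same)
    int min = (input.length * mapTaskIndex) / mapTaskCount;
    int maxExclusive = (input.length * (mapTaskIndex + 1)) / mapTaskCount;
    for (int i = min; i < maxExclusive; i++) {
        mapper.map(input[i], (k, v) -> {
            // the row is this slice's index; the column comes from getReduceIndex(k)
            Map<K, A> bucket = result[mapTaskIndex][getReduceIndex(k)];
            A container = bucket.computeIfAbsent(k, key -> collector.supplier().get());
            collector.accumulator().accept(container, v);
        });
    }
});
return result;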
combineAndFinishAll
method: Map<K, R> combineAndFinishAll(Map<K, A>[][] input)
(parallel implementation required)
In this stage, you will take the matrix you just completed and combine all of the separate rows down into one array of maps. Afterward, you will convert this combined array of maps into one final map. This method should run in parallel.
As mentioned previously, you should go directly down the matrix to access the same bucket across the different slices you created in the mapAndAccumulateAll step. For all of the maps in a column, go through each entry and combine them down into a single map, producing one combined row. You will need to make use of the Collector's finisher again, but you will also need its combiner, which you can access using the combiner() method. Although the combine step differs from the simple framework, the finish step should mirror what you did previously.
Hint: You can use the provided MultiWrapMap class to return the final row as a valid output. You should also combine before you finish.
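A minimal sketch of one way this method body could look; it skips the MultiWrapMap approach from the hint and instead writes finished results into a ConcurrentHashMap, with a parallel IntStream standing in for the course's parallel constructs and a collector field assumed as before.

Map<K, R> result = new ConcurrentHashMap<>();
BinaryOperator<A> combiner = collector.combiner();
Function<A, R> finisher = collector.finisher();
int reduceTaskCount = input[0].length;
IntStream.range(0, reduceTaskCount).parallel().forEach(reduceIndex -> {
    // walk straight down this column of the matrix, merging every slice's map for this bucket
    Map<K, A> combined = new HashMap<>();
    for (Map<K, A>[] row : input) {
        for (Map.Entry<K, A> entry : row[reduceIndex].entrySet()) {
            combined.merge(entry.getKey(), entry.getValue(), combiner);
        }
    }
    // combine first, then finish
    for (Map.Entry<K, A> entry : combined.entrySet()) {
        result.put(entry.getKey(), finisher.apply(entry.getValue()));
    }
});
return result;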
Test Suite: MapReduceAssignmentTestSuite
Testing Your Solution
Correctness
There is a top-level test suite composed of sub-suites that can be invoked separately when you want to focus on one part of the assignment.
top level
class: FrameworksLabTestSuite.java
package: mapreduce
source folder: testing/src/test/java
sub
class: SimpleFrameworkTestSuite.java
package: mapreduce
source folder: testing/src/test/java

class: MatrixFrameworkTestSuite.java
package: mapreduce
source folder: testing/src/test/java
Rubric
As always, please make sure to cite your work appropriately.
Total points: 100
Simple framework subtotal: 40
- Correct mapAll (10)
- Correct accumulateAll (20)
- Correct finishAll (10)
Matrix framework subtotal: 50
- Correct mapAndAccumulateAll (25)
- Correct combineAndFinishAll (25)
Whole project:
- Clarity and efficiency (10)