Iterative Averaging
Motivation
Iterative Averaging is the process of updating an array so that each interior element becomes the average of its two neighbors (the values at the indices one before and one after it). After repeating this for many iterations, the array converges toward a fixed set of values. For example, given the following array, we can perform iterations of this algorithm until it eventually converges (here, to a linear ramp between the fixed endpoint values 0.0 and 1.0):
[0] | [1] | [2] | [3] | [4] | [5] | [6] | [7] | [8] | [9] | [10] |
---|---|---|---|---|---|---|---|---|---|---|
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.25 | 0.5 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.125 | 0.25 | 0.625 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0625 | 0.125 | 0.375 | 0.625 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.03125 | 0.0625 | 0.21875 | 0.375 | 0.6875 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.015625 | 0.03125 | 0.125 | 0.21875 | 0.453125 | 0.6875 | 1.0 |
0.0 | 0.0 | 0.0 | 0.0078125 | 0.015625 | 0.0703125 | 0.125 | 0.2890625 | 0.453125 | 0.7265625 | 1.0 |
0.0 | 0.0 | 0.00390625 | 0.0078125 | 0.0390625 | 0.0703125 | 0.1796875 | 0.2890625 | 0.5078125 | 0.7265625 | 1.0 |
0.0 | 0.001953125 | 0.00390625 | 0.021484375 | 0.0390625 | 0.109375 | 0.1796875 | 0.34375 | 0.5078125 | 0.75390625 | 1.0 |
0.0 | 0.001953125 | 0.01171875 | 0.021484375 | 0.0654296875 | 0.109375 | 0.2265625 | 0.34375 | 0.548828125 | 0.75390625 | 1.0 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
X10-like Phasers have been part of Java since JDK 7 (java.util.concurrent.Phaser). We will gain some experience with using Phasers in a parallel for-loop context. Phasers allow us to change the structure of our loops and reduce the overhead in the algorithm.
For more information on the algorithm and how we can use Phasers to make it better, review Topic 3.5 from the RiceX course.
Background
Check out the reference page on phasers, in particular these methods:
- bulkRegister
- arriveAndAwaitAdvance
- arriveAndDeregister (note: not required for this studio, but good to know about.)
Warning: Our use of the forall loop with Phasers does not fully convey how finicky they can be. More than most other features, Phasers require care to actually get performance improvements.
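If you have not used Phasers before, the following is a minimal, self-contained sketch (our own illustration, not part of the studio code) of how these methods typically fit together: a fixed number of tasks is registered up front with bulkRegister, each task blocks at arriveAndAwaitAdvance until every registered party has arrived, and a finished task removes itself with arriveAndDeregister.

<syntaxhighlight lang="java">
import java.util.concurrent.Phaser;

// Minimal Phaser demo (illustration only, not studio code).
public class PhaserSketch {
    public static void main(String[] args) throws InterruptedException {
        final int numTasks = 4;
        final Phaser phaser = new Phaser();
        phaser.bulkRegister(numTasks); // register one party per task, all at once

        Thread[] workers = new Thread[numTasks];
        for (int t = 0; t < numTasks; ++t) {
            final int id = t;
            workers[t] = new Thread(() -> {
                System.out.println("task " + id + ": phase 0 work");
                phaser.arriveAndAwaitAdvance(); // block until all registered parties arrive
                System.out.println("task " + id + ": phase 1 work");
                phaser.arriveAndDeregister();   // done; stop participating in future phases
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
    }
}
</syntaxhighlight>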
Code to Investigate
Code to Implement
Warmup
SequentialIterativeAverager
class: SequentialIterativeAverager.java
methods: iterativelyAverage
package: iterativeaveraging.warmup
source folder: student/src/main/java
method: public double[] iterativelyAverage(double[] originalArray, int iterationCount)
(sequential implementation only)
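For reference, here is a minimal sketch of what the sequential version might look like, assuming (consistent with the table in the Motivation section) that the two endpoints stay fixed and that each iteration reads from the previous iteration's values rather than updating the array in place:

<syntaxhighlight lang="java">
import java.util.Arrays;

// Sketch only; the studio's class lives in iterativeaveraging.warmup.
public class SequentialIterativeAveragerSketch {
    public double[] iterativelyAverage(double[] originalArray, int iterationCount) {
        double[] current = Arrays.copyOf(originalArray, originalArray.length);
        double[] next = Arrays.copyOf(originalArray, originalArray.length);
        for (int iteration = 0; iteration < iterationCount; ++iteration) {
            // each interior element becomes the average of its neighbors from the previous iteration
            for (int i = 1; i < current.length - 1; ++i) {
                next[i] = (current[i - 1] + current[i + 1]) / 2.0;
            }
            double[] temp = current; // swap buffers so this iteration's output feeds the next one
            current = next;
            next = temp;
        }
        return current;
    }
}
</syntaxhighlight>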
PhaserWarmup
class: PhaserWarmup.java
methods: warmup, warmupPhased
package: phaser.warmup
source folder: student/src/main/java
method: public static void warmup(List<Character> letters, List<Integer> digits)
(sequential implementation only)
method: public static void warmupPhased(List<Character> letters, List<Integer> digits)
(sequential implementation only)
Studio
IterativeAveragingUtils
sliceDoubleArrayIntoRangesForIterativeAveraging
Each IterativeAverager should slice the data into ranges. The constructor for each IterativeAverager is passed the number of slices to create.
Tip: Use Slices.createNSlices(min, maxExclusive, numSlices)
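Purely for intuition about what those ranges look like (the studio expects you to call Slices.createNSlices rather than hand-rolling this), the sketch below splits the interior indices of an 11-element array into three nearly equal, contiguous ranges:

<syntaxhighlight lang="java">
// Illustration only; the course's Slices.createNSlices returns its own range objects.
public class SlicingSketch {
    public static void main(String[] args) {
        int min = 1, maxExclusive = 10, numSlices = 3; // interior indices of an 11-element array
        int length = maxExclusive - min;
        for (int s = 0; s < numSlices; ++s) {
            int lo = min + (s * length) / numSlices;       // inclusive start of slice s
            int hi = min + ((s + 1) * length) / numSlices; // exclusive end of slice s
            System.out.println("slice " + s + ": [" + lo + ", " + hi + ")");
        }
        // prints: slice 0: [1, 4), slice 1: [4, 7), slice 2: [7, 10)
    }
}
</syntaxhighlight>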
createPhasableDoubleArraysForIterativeAveraging
Parallel
class: ParallelIterativeAverager.java
methods: iterativelyAverage
package: iterativeaveraging.studio
source folder: student/src/main/java
method: public double[] iterativelyAverage(double[] originalArray, int iterationCount)
(parallel implementation required)
For this method, you should not use Phasers. Instead, implement a parallel version of the sequential Iterative Averaging algorithm shown above, making use of the PhasableDoubleArrays class. The return value should be the current version of the array after it has gone through the number of iterations passed into the method. The intended loop structure (see the sketch below) is:
sequential loop
    parallel loop
        work
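A rough sketch of that structure follows. It uses a plain ExecutorService instead of the course's constructs and ignores PhasableDoubleArrays, so treat it as an illustration of the loop shape rather than the expected solution:

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sequential loop around a parallel loop: every iteration forks one task per slice and waits for all.
public class ParallelAveragerSketch {
    public static double[] iterativelyAverage(double[] originalArray, int iterationCount, int numSlices)
            throws InterruptedException {
        double[][] arrays = { originalArray.clone(), originalArray.clone() };
        ExecutorService pool = Executors.newFixedThreadPool(numSlices);
        try {
            int interior = originalArray.length - 2; // only indices 1 .. length-2 get averaged
            for (int iteration = 0; iteration < iterationCount; ++iteration) {
                double[] src = arrays[iteration % 2];
                double[] dst = arrays[(iteration + 1) % 2];
                List<Callable<Void>> tasks = new ArrayList<>();
                for (int s = 0; s < numSlices; ++s) {
                    int lo = 1 + (s * interior) / numSlices;
                    int hi = 1 + ((s + 1) * interior) / numSlices;
                    tasks.add(() -> {
                        for (int i = lo; i < hi; ++i) {
                            dst[i] = (src[i - 1] + src[i + 1]) / 2.0;
                        }
                        return null;
                    });
                }
                pool.invokeAll(tasks); // wait for every slice before starting the next iteration
            }
            return arrays[iterationCount % 2];
        } finally {
            pool.shutdown();
        }
    }
}
</syntaxhighlight>

Note that the tasks are created and joined once per iteration; this repeated fork/join is exactly the overhead the Phaser-based version is meant to reduce.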
PhasedParallel
class: PhasedParallelIterativeAverager.java
methods: iterativelyAverage
package: iterativeaveraging.studio
source folder: student/src/main/java
method: public double[] iterativelyAverage(double[] originalArray, int iterationCount)
(parallel implementation required)
Before you get started on this, make sure you review the Background section in order to understand how to use Phasers (it will look different from the RiceX implementation!). This time, we will use Phasers to create a parallel version of the algorithm that has less overhead. Here are a few notes to keep in mind when working on this assignment (a rough sketch follows the loop outline below):
- Think carefully about how the loops in this version will be structured; drawing out the computation graph may help.
- Bulk registering a Phaser determines how many times phaser.arriveAndAwaitAdvance() must be called before any of the threads can move on to the next phase. Make sure to register the right number!
create phaser
register phaser for each task
parallel loop
    sequential loop
        work
        arrive and await advance on phaser
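Here is a rough sketch of that structure using plain threads and java.util.concurrent.Phaser directly. As before, it is an illustration only; the studio solution should go through PhasableDoubleArrays and the course's task constructs:

<syntaxhighlight lang="java">
import java.util.concurrent.Phaser;

// Parallel loop around a sequential loop: each task is created once and a Phaser
// synchronizes the tasks at the end of every iteration.
public class PhasedAveragerSketch {
    public static double[] iterativelyAverage(double[] originalArray, int iterationCount, int numTasks)
            throws InterruptedException {
        double[][] arrays = { originalArray.clone(), originalArray.clone() };
        Phaser phaser = new Phaser();
        phaser.bulkRegister(numTasks); // one party per task, registered before any task starts

        Thread[] workers = new Thread[numTasks];
        int interior = originalArray.length - 2; // only indices 1 .. length-2 get averaged
        for (int t = 0; t < numTasks; ++t) {
            int lo = 1 + (t * interior) / numTasks;
            int hi = 1 + ((t + 1) * interior) / numTasks;
            workers[t] = new Thread(() -> {
                for (int iteration = 0; iteration < iterationCount; ++iteration) {
                    double[] src = arrays[iteration % 2];
                    double[] dst = arrays[(iteration + 1) % 2];
                    for (int i = lo; i < hi; ++i) {
                        dst[i] = (src[i - 1] + src[i + 1]) / 2.0;
                    }
                    // wait here until every task has finished this iteration
                    phaser.arriveAndAwaitAdvance();
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        return arrays[iterationCount % 2];
    }
}
</syntaxhighlight>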
Optional Challenge
PointToPointPhasedParallel
class: PointToPointPhasedParallelIterativeAverager.java
methods: iterativelyAverage
package: iterativeaveraging.challenge
source folder: student/src/main/java
FuzzyPhasedParallel
class: FuzzyPhasedParallelIterativeAverager.java
methods: iterativelyAverage
package: iterativeaveraging.studio
source folder: student/src/main/java
method: public double[] iterativelyAverage(double[] originalArray, int iterationCount)
(parallel implementation required)
Which indices must be complete before neighboring tasks can proceed? Which indices have more flexibility?
create phaser
register phaser for each task
parallel loop
    sequential loop
        shared work
        arrive on phaser
        local work
        await advance (must specify the phase) on phaser
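The key difference from the previous version is that arriving and waiting are split apart, so a task can overlap purely local work with other tasks that are still finishing. A hypothetical per-iteration body might look like the sketch below; doSharedWork and doLocalWork are placeholder names, not studio code:

<syntaxhighlight lang="java">
import java.util.concurrent.Phaser;

// Split-phase ("fuzzy") barrier sketch. doSharedWork stands for updating the values that
// neighboring tasks will read next iteration; doLocalWork stands for values only this task reads.
public class FuzzyBarrierSketch {
    static void oneIteration(Phaser phaser, Runnable doSharedWork, Runnable doLocalWork) {
        doSharedWork.run();
        int phase = phaser.arrive();  // signal that the shared values are ready, without blocking
        doLocalWork.run();            // overlap local work with other tasks' arrivals
        phaser.awaitAdvance(phase);   // only now wait until every registered party has arrived
    }
}
</syntaxhighlight>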
Testing Your Solution
Correctness
class: RigidIterativeAveragingTestSuite.java
package: iterativeaveraging.rigid.studio
source folder: testing/src/test/java
class: IterativeAveragingUtilsTestSuite.java
package: iterativeaveraging.util.studio
source folder: testing/src/test/java

class: ParallelIterativeAveragerTestSuite.java
package: iterativeaveraging.rigid.studio
source folder: testing/src/test/java
class: PhasedIterativeAveragerTestSuite.java
package: iterativeaveraging.rigid.studio
source folder: testing/src/test/java

Performance
class: IterativeAveragingTiming.java
package: iterativeaveraging.studio
source folder: src/main/java