Iterative Averaging

From CSE231 Wiki
 
Motivation

Iterative Averaging is the process of updating an array so that each interior element becomes the average of the elements immediately before and after it, while the first and last elements stay fixed. After repeating this for many iterations, the array converges to a fixed set of values. For example, given the following array, we can perform iterations of the algorithm until it eventually converges (a sequential sketch follows the table):

[0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.25 0.5 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.125 0.25 0.625 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0625 0.125 0.375 0.625 1.0
0.0 0.0 0.0 0.0 0.0 0.03125 0.0625 0.21875 0.375 0.6875 1.0
0.0 0.0 0.0 0.0 0.015625 0.03125 0.125 0.21875 0.453125 0.6875 1.0
0.0 0.0 0.0 0.0078125 0.015625 0.0703125 0.125 0.2890625 0.453125 0.7265625 1.0
0.0 0.0 0.00390625 0.0078125 0.0390625 0.0703125 0.1796875 0.2890625 0.5078125 0.7265625 1.0
0.0 0.001953125 0.00390625 0.021484375 0.0390625 0.109375 0.1796875 0.34375 0.5078125 0.75390625 1.0
0.0 0.001953125 0.01171875 0.021484375 0.0654296875 0.109375 0.2265625 0.34375 0.548828125 0.75390625 1.0
... (after many more iterations, the array converges to the values below) ...
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
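
As a minimal sequential sketch of this update (the method and variable names are illustrative, not the course-provided API; both buffers are assumed to start with the fixed boundary values in place):

// Minimal sequential sketch of iterative averaging (illustrative names, not
// the course API). Each interior element becomes the average of its two
// neighbors; index 0 and the last index never change.
static double[] iterativelyAverage(double[] original, int iterationCount) {
    double[] current = java.util.Arrays.copyOf(original, original.length);
    double[] next = java.util.Arrays.copyOf(original, original.length); // endpoints already in place
    for (int iteration = 0; iteration < iterationCount; iteration++) {
        for (int i = 1; i < current.length - 1; i++) {
            next[i] = (current[i - 1] + current[i + 1]) * 0.5;
        }
        double[] temp = current; // swap the buffers for the next iteration
        current = next;
        next = temp;
    }
    return current;
}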

X10-like Phasers have been part of Java since JDK 7 as the java.util.concurrent.Phaser class. We will gain some experience using Phasers in a parallel for-loop context. Phasers allow us to change the structure of our loops and reduce overhead in the algorithm.

Background

Java Phaser

Video: https://www.youtube.com/watch?v=whPAPmylbEU

Check out the reference page on phasers

Guide to the Java Phaser

class Phaser

bulkRegister (also see instructions on ForkLoopWithPhaserIterativeAverager)
arriveAndAwaitAdvance
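
As a rough, self-contained illustration of these two methods (this is not the studio's required structure; the task and iteration counts are arbitrary), a Phaser can make a fixed set of tasks wait for one another at the end of every iteration:

import java.util.concurrent.Phaser;

// Rough illustration of bulkRegister + arriveAndAwaitAdvance: TASKS threads
// each run ITERATIONS phases and wait for one another at the end of every phase.
public class PhaserDemo {
    public static void main(String[] args) throws InterruptedException {
        final int TASKS = 4;
        final int ITERATIONS = 3;
        Phaser phaser = new Phaser();
        phaser.bulkRegister(TASKS); // one registration (party) per task

        Thread[] threads = new Thread[TASKS];
        for (int t = 0; t < TASKS; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int iteration = 0; iteration < ITERATIONS; iteration++) {
                    System.out.println("task " + id + " working in phase " + phaser.getPhase());
                    // block until all registered parties have arrived, then advance together
                    phaser.arriveAndAwaitAdvance();
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
    }
}
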
Warning: Our use of the forall loop with Phasers does not accurately convey their finicky nature. More than other features, Phasers seem to require more care to get performance improvements.

PhasableDoubleArrays

new PhasableDoubleArrays(originalData, initializerFunction)

Ranges

Ranges (Previous Exercise To Use)

Lecture

Video: https://www.youtube.com/watch?v=0c2PZ-ARDDE

Code to Implement

IterativeAveragingUtils

class: IterativeAveragingUtils.java
methods: slice, createPhasableDoubleArrays
package: iterativeaveraging.util.exercise
source folder: student/src/main/java

slice

Video: https://www.youtube.com/watch?v=_taVHpi1Hpg

Each IterativeAverager should slice the data into ranges. The constructor for each IterativeAverager is passed the number of slices to create.

Tip: Use Ranges.slice(min, maxExclusive, numSlices)

If you were to create 3 slices from an array of length 11:

[Figure: IterativeAveraging.png]

The slices should have these properties (a slicing sketch follows the table):

         sliceID   minInclusive   maxExclusive
sliceA   0         1              4
sliceB   1         4              7
sliceC   2         7              10
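
A hand-rolled sketch of that slicing arithmetic is below (illustrative only; the exercise should call the provided Ranges.slice helper, and the Slice record here is a stand-in for the course's range type):

import java.util.ArrayList;
import java.util.List;

// Illustrative only: hand-rolled computation of N contiguous slices over
// [minInclusive, maxExclusive). The Slice record stands in for the course's
// range type; the exercise itself should use the provided Ranges.slice helper.
public class SliceSketch {
    record Slice(int sliceId, int minInclusive, int maxExclusive) {}

    static List<Slice> createSlices(int minInclusive, int maxExclusive, int numSlices) {
        int length = maxExclusive - minInclusive;
        List<Slice> slices = new ArrayList<>();
        for (int sliceId = 0; sliceId < numSlices; sliceId++) {
            // distribute any remainder so slice sizes differ by at most one
            int lo = minInclusive + (int) ((long) length * sliceId / numSlices);
            int hi = minInclusive + (int) ((long) length * (sliceId + 1) / numSlices);
            slices.add(new Slice(sliceId, lo, hi));
        }
        return slices;
    }

    public static void main(String[] args) {
        // interior of an 11-element array: indices 1 (inclusive) through 10 (exclusive)
        createSlices(1, 10, 3).forEach(System.out::println);
        // prints the three slices from the table above: [1,4), [4,7), [7,10)
    }
}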

createPhasableDoubleArrays

Video: https://www.youtube.com/watch?v=-SBcwicgyOE

TL;DR: Initialize a double[] and make sure to copy over the values at the original data's first (index 0) and last indices.
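
A minimal sketch of that TL;DR as an initializer function (the exact Function shape that PhasableDoubleArrays expects is defined by the course API, so treat the type below as an assumption):

import java.util.function.Function;

// Assumed shape: a Function from the original data to a freshly initialized
// array of the same length whose interior defaults to 0.0 and whose first and
// last values are copied over. Check the PhasableDoubleArrays javadoc for the
// exact type it expects.
Function<double[], double[]> initializer = original -> {
    double[] result = new double[original.length];
    result[0] = original[0];                                      // copy the 0th value
    result[original.length - 1] = original[original.length - 1]; // copy the last value
    return result;
};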

LoopOfForkLoopsIterativeAverager

class: LoopOfForkLoopsIterativeAverager.java
methods: sliceCount, iterativelyAverage
package: iterativeaveraging.exercise
source folder: student/src/main/java

method: public double[] iterativelyAverage(double[] originalArray, int iterationCount) (parallel implementation required)

For this method, you should not be using Phasers. Instead, implement a parallel version of the Sequential Iterative Averaging warm up. Make use of the PhasableDoubleArrays class. The return value should be the current version of the array after it has gone through the number of iterations passed to the method (a thread-based sketch follows the diagram below).

sequential loop
    parallel loop
        work


What is the work for each task?

[Figure: LoopOfForkLoops_IterativeAverager.svg]
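
A hedged, thread-based sketch of this structure is below (it reuses the illustrative Slice record from the slicing sketch above; the real exercise uses the course's parallel-loop construct and PhasableDoubleArrays rather than raw threads, and both arrays are assumed to already hold the fixed boundary values):

import java.util.ArrayList;
import java.util.List;

// Sketch only: outer sequential loop over iterations; inside each iteration,
// fork one task per slice and join them all before moving on.
static void loopOfForkLoops(double[] current, double[] next, int iterationCount,
                            List<Slice> slices) throws InterruptedException {
    for (int iteration = 0; iteration < iterationCount; iteration++) {
        List<Thread> tasks = new ArrayList<>();
        for (Slice slice : slices) {
            Thread task = new Thread(() -> {
                for (int i = slice.minInclusive(); i < slice.maxExclusive(); i++) {
                    next[i] = (current[i - 1] + current[i + 1]) * 0.5;
                }
            });
            task.start();
            tasks.add(task);
        }
        for (Thread task : tasks) {
            task.join(); // joining every task is the barrier between iterations
        }
        // copy the freshly computed values back so the next iteration reads them
        System.arraycopy(next, 0, current, 0, current.length);
    }
}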

X10PhasedForkLoopIterativeAverager

class: X10PhasedForkLoopIterativeAverager.java
methods: x10, sliceCount, iterativelyAverage
package: iterativeaveraging.exercise
source folder: student/src/main/java

method: public double[] iterativelyAverage(double[] originalArray, int iterationCount) (parallel implementation required)

x10 phased parallel loop
    sequential loop
        work
        arrive and await advance on phaser

What is the work for each iteration of a task?

[Figure: X10PhasedForkLoop_IterativeAverager.svg]
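
A hedged sketch of that structure with a raw Phaser and threads is below (again reusing the illustrative Slice record; the course's x10-style construct manages the phaser for you, whereas this sketch wires it up by hand, and both arrays are assumed to already hold the fixed boundary values):

import java.util.List;
import java.util.concurrent.Phaser;

// Sketch only: one long-lived task per slice; the parallel loop is entered
// once, and each task runs the full sequential iteration loop, meeting the
// other tasks at the phaser after every iteration. The final values end up in
// arrayB when iterationCount is odd and in arrayA when it is even.
static void x10PhasedForkLoop(double[] arrayA, double[] arrayB, int iterationCount,
                              List<Slice> slices) throws InterruptedException {
    Phaser phaser = new Phaser();
    phaser.bulkRegister(slices.size()); // one party per slice task

    Thread[] tasks = new Thread[slices.size()];
    for (int s = 0; s < slices.size(); s++) {
        Slice slice = slices.get(s);
        tasks[s] = new Thread(() -> {
            double[] source = arrayA;      // task-local view of which buffer is current
            double[] destination = arrayB;
            for (int iteration = 0; iteration < iterationCount; iteration++) {
                for (int i = slice.minInclusive(); i < slice.maxExclusive(); i++) {
                    destination[i] = (source[i - 1] + source[i + 1]) * 0.5;
                }
                // wait here until every slice has finished this iteration
                phaser.arriveAndAwaitAdvance();
                double[] temp = source;    // every task flips its buffers in lock step
                source = destination;
                destination = temp;
            }
        });
        tasks[s].start();
    }
    for (Thread task : tasks) {
        task.join();
    }
}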

ForkLoopWithPhaserIterativeAverager

class: ForkLoopWithPhaserIterativeAverager.java
methods: phaserCreator, sliceCount, iterativelyAverage
package: iterativeaveraging.exercise
source folder: student/src/main/java

method: public double[] iterativelyAverage(double[] originalArray, int iterationCount) (parallel implementation required)

Before you get started on this, make sure you review the Background section in order to understand how to utilize Phasers (it will look different from the X10 implementation). This time, we will use Phasers to create a parallel version of the algorithm that has less overhead. Here are a few notes to keep in mind when working on this assignment:

  • Think carefully about how the loops in this version will be structured; drawing out the computation graph may help.
  • Bulk registering a Phaser indicates how many times phaser.arriveAndAwaitAdvance() needs to be called before any of the threads are able to move on. In other words, it equals the number of threads that must arrive at the checkpoint before any can move on. Make sure to register the right number (see the registration sketch after the pseudocode below)!
create phaser
register phaser for each task
parallel loop
    sequential loop
        work
        arrive and await advance on phaser
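
As a hedged illustration of the registration detail from the second note above (the helper name is made up; only the Phaser calls are real API):

import java.util.concurrent.Phaser;

// Illustrative helper (hypothetical name): register exactly one party per task
// that will call arriveAndAwaitAdvance(). Too few parties let threads advance
// early; too many deadlock them waiting for arrivals that never come.
static Phaser createPhaserForTasks(int taskCount) {
    Phaser phaser = new Phaser();
    phaser.bulkRegister(taskCount);
    return phaser;
}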

What is the work for each iteration of a task?

[Figure: ForkLoopWithPhaser_IterativeAverager.svg]

Testing Your Solution

Correctness

class: __RigidIterativeAveragingTestSuite.java
package: iterativeaveraging.rigid.exercise
source folder: testing/src/test/java

class: _IterativeAveragingUtilsTestSuite.java
package: iterativeaveraging.util.exercise
source folder: testing/src/test/java

class: _LoopOfForkLoopsIterativeAveragerTestSuite.java
package: iterativeaveraging.rigid.exercise
source folder: testing/src/test/java

class: _X10PhasedForkLoopIterativeAveragerTestSuite.java
package: iterativeaveraging.rigid.exercise
source folder: testing/src/test/java

class: _ForkLoopWithPhaserIterativeAveragerTestSuite.java
package: iterativeaveraging.rigid.exercise
source folder: testing/src/test/java