I started multithreading DefUse.computeDU.
Optimizations (in Simple.java)
// Compute defList, useList, useCount fields for each register.
DefUse.computeDU(ir);

// Recompute isSSA flags
DefUse.recomputeSSA(ir);

// Simple copy propagation.
// This pass incrementally updates the register list.
copyPropagation(ir);

// Simple type propagation.
// This pass uses the register list, but doesn't modify it.
if (typeProp) {
  typePropagation(ir);
}

// Perform simple bounds-check and arraylength elimination.
// This pass incrementally updates the register list.
if (foldChecks) {
  arrayPropagation(ir);
}

// Simple dead code elimination.
// This pass incrementally updates the register list.
eliminateDeadInstructions(ir);

// Constant folding.
// This pass usually doesn't modify the DU, but
// if it does it will recompute it.
foldConstants(ir);

// Simple local expression folding respecting DU.
if (ir.options.LOCAL_EXPRESSION_FOLDING && ExpressionFolding.performLocal(ir)) {
  // Constant folding again.
  foldConstants(ir);
}

// Try to remove conditional branches with constant operands.
// If it actually constant folds a branch,
// this pass will recompute the DU.
if (foldBranches) {
  simplifyConstantBranches(ir);
}

// Should we sort commutative use operands?
if (sortRegisters) {
  sortCommutativeRegisterUses(ir);
}
Our computeDU optimization multithreads at the register level. Unfortunately, that granularity is far too fine, and we see significant overhead from the Runnable objects and threads.
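For context, here is a minimal sketch of what "one task per register" looks like, assuming a fixed thread pool from java.util.concurrent. IR and Register stand in for JikesRVM's opt-compiler types, the register-list traversal is assumed, and processRegister is a hypothetical per-register helper; this is an illustration, not the actual patch:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

void computeDUParallel(IR ir, int nThreads) throws InterruptedException {
  ExecutorService pool = Executors.newFixedThreadPool(nThreads);
  // Walk the symbolic registers (assumed linked-list traversal, as in
  // JikesRVM's register pool) and submit one Runnable per register.
  for (Register reg = ir.regpool.getFirstSymbolicRegister();
       reg != null; reg = reg.getNext()) {
    final Register r = reg;
    pool.execute(new Runnable() {
      public void run() {
        // Hypothetical helper doing the per-register def/use bookkeeping.
        // Each task does so little work that allocating and scheduling
        // the Runnable costs more than the work itself.
        processRegister(r);
      }
    });
  }
  pool.shutdown();
  pool.awaitTermination(1, TimeUnit.MINUTES);
}

That per-task overhead is exactly what shows up in the timings below.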
Although there are a few cases where we actually benefit (kinda): see the 3-to-4-thread transition below, where the time drops by roughly 40%, though it is still far slower than the single-threaded original.
Threads        Execution Time (ns)   Instructions   Method
1 (original)               108715            821    deleteTree
2                          556187            821    deleteTree
3                        24728977            821    deleteTree
4                        14231161            821    deleteTree
The numbers were gathered by running the DaCapo fop benchmark three times, forcing the optimizing compiler as the initial compiler:

for i in {1..3}; do ./rvm -X:aos:initial_compiler=opt -classpath dacapo-2006-10-MR2.jar Harness fop; done
The graph axes will be fixed and the dimensions will be lined up.