Benchmarking Environment: Issues
https://code.cor-lab.de/ (2013-09-04T18:41:21Z)
Open Source Collaboration Platform
Enhancement #1609 (New): Examples should not be installed in $prefix/bin
https://code.cor-lab.de/issues/1609 (2013-09-04T18:41:21Z, J. Moringen, jmoringe@cor-lab.uni-bielefeld.de)

Feature #1603 (New): Export benchmark results in JMeter format
https://code.cor-lab.de/issues/1603 (2013-08-29T14:44:45Z, Anonymous)
<p>Apache JMeter format:<br /><a class="external" href="http://wiki.apache.org/jmeter/LogAnalysis#The_JMeter_Log_Format">http://wiki.apache.org/jmeter/LogAnalysis#The_JMeter_Log_Format</a></p>

Feature #1010 (In Progress): Valgrind as intrinsic execution engine
https://code.cor-lab.de/issues/1010 (2012-06-19T22:04:23Z, M. Rolf, mrolf@cor-lab.uni-bielefeld.de)
<p>Allow benchmark calls that execute valgrind in the background. Something like<br /><pre>
MyBenchmarkExecutable --engine cpu
Benchcase [mu_suite:my_case]
Total (corrected) time: 1.3 s
Estimated cost per operation: 1.7 us
Operations per second: 5.9e5
</pre><br />vs. <br /><pre>
MyBenchmarkExecutable --engine valgrind
Benchcase [mu_suite:my_case]
Total (corrected) time: 1.3 GCycles
Estimated cost per operation: 1.7 MCycles
Operations per second: 5.9e5
</pre></p>
<p>The "--engine valgrind" mode must run valgrind in the background, read and parse the generated logfile, and transfer the results into the internal result data structure.</p>
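The logfile-parsing step could be sketched roughly as follows. This assumes the callgrind profile format, where the total event counts (e.g. Ir, instructions executed) appear on a trailing "summary:" line; the function name and how it would be wired into the engine are hypothetical, not part of this ticket.

```cpp
#include <istream>
#include <sstream>
#include <string>

// Hypothetical sketch: extract the total event count (e.g. Ir) from a
// callgrind output file. The "--engine valgrind" mode would first spawn
//   valgrind --tool=callgrind <benchmark-binary>
// and then feed the resulting callgrind.out.* file to this parser.
long long parse_callgrind_total(std::istream& log) {
    std::string line;
    while (std::getline(log, line)) {
        // The callgrind format carries the run totals on a "summary:" line.
        if (line.rfind("summary:", 0) == 0) {
            std::istringstream fields(line.substr(8));
            long long total = 0;
            fields >> total;
            return total;
        }
    }
    return -1;  // no summary line found
}
```

The extracted count would then be stored in the internal result data structure in place of wall-clock time, with cycle-based units as in the example output above.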
Some ideas...
<ul>
<li>Internally disable warm-up and init-count (pointless when using valgrind)</li>
<li>Automatically reduce the repetition count, e.g. by a factor of 100</li>
</ul>

Enhancement #948 (Feedback): Parameters
https://code.cor-lab.de/issues/948 (2012-03-14T12:55:41Z, Anonymous)
<p>We already have some parameters for our benchmarks (e.g. size hint, repetition count hint). For more possible parameters, we could have a look at the Japex parameter reference:<br /><a class="external" href="http://japex.java.net/docs/manual.html#Reference">http://japex.java.net/docs/manual.html#Reference</a><br />They define global as well as test-case-specific parameters.</p>

Tasks #895 (Feedback): Unit support
https://code.cor-lab.de/issues/895 (2012-02-22T17:29:33Z, Anonymous)
<p>How do we want to handle units?<br />E.g. some measures are in seconds, others in microseconds, and some are dimensionless (e.g. operations per second).</p>

Tasks #826 (In Progress): Generate common benchmark output
https://code.cor-lab.de/issues/826 (2012-01-24T12:22:57Z, Anonymous)
<p>Output results in common benchmark output formats to support common benchmark (visualization) tools.</p>

Tasks #824 (In Progress): Measure CPU time via callgrind
https://code.cor-lab.de/issues/824 (2012-01-24T12:17:45Z, Anonymous)
<p>Measure CPU time via callgrind.</p>

Tasks #823 (In Progress): Executing entire benchmark
https://code.cor-lab.de/issues/823 (2012-01-24T12:17:22Z, Anonymous)
<p>Optionally compile benchmarks from several files into a single binary, so that they can be executed at once and yield consolidated results.</p>

Tasks #822 (In Progress): Benchmark Baselines
https://code.cor-lab.de/issues/822 (2012-01-24T12:16:32Z, Anonymous)
Optionally benchmark against <em>baselines</em>:
<ul>
<li>Either from 3rd-party libraries</li>
<li>Or from older benchmarking runs</li>
</ul>
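The baseline comparison described above could boil down to a check like the following minimal sketch; the function name, the per-operation cost inputs, and the 10% tolerance are illustrative assumptions, not part of the ticket.

```cpp
// Hypothetical regression check against a stored baseline result
// (from a third-party library or an older benchmarking run).
// A run counts as a regression when its per-operation cost exceeds
// the baseline cost by more than the given relative tolerance.
bool is_regression(double baseline_cost, double current_cost,
                   double tolerance = 0.10) {
    return current_cost > baseline_cost * (1.0 + tolerance);
}
```

For example, with a baseline of 1.0 us per operation, a current cost of 1.2 us would be flagged, while 1.05 us would stay within the default 10% tolerance.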