diff --git a/DOCUMENTATION.md b/DOCUMENTATION.md
index 607f47ead..990656d38 100644
--- a/DOCUMENTATION.md
+++ b/DOCUMENTATION.md
@@ -417,6 +417,19 @@ The currently eight fixed workloads are:
 | **7** | Molecular property prediction | OGBG | GNN | CE | mAP | 0.28098 | 0.268729 | 18,477 |
 | **8** | Translation | WMT | Transformer | CE | BLEU | 30.8491 | 30.7219 | 48,151 |
 
+Default Dropout Values for Different Workloads:
+
+| Workload               | Dropout Values                                     |
+|------------------------|----------------------------------------------------|
+| criteo 1tb             | dropout_rate: 0.0                                  |
+| fastmri                | dropout_rate: 0.0                                  |
+| imagenet_resnet        | dropout not used                                   |
+| imagenet_vit           | dropout_rate: 0.0                                  |
+| librispeech_conformer  | attention_dropout_rate: 0.0 <br> attention_residual_dropout_rate: 0.1 <br> conv_residual_dropout_rate: 0.0 <br> feed_forward_dropout_rate: 0.0 <br> feed_forward_residual_dropout_rate: 0.1 <br> input_dropout_rate: 0.1 |
+| librispeech_deepspeech | input_dropout_rate: 0.1 <br> feed_forward_dropout_rate: 0.1 <br> (Only for JAX - dropout_rate in CudnnLSTM class: 0.0) |
+| ogbg                   | dropout_rate: 0.1                                  |
+| wmt                    | dropout_rate: 0.1 <br> attention_dropout_rate: 0.1 |
+
 #### Randomized workloads
 
 In addition to the [fixed and known workloads](#fixed-workloads), there will also be randomized workloads in our benchmark. These randomized workloads will introduce minor modifications to a fixed workload (e.g. small model changes). The exact instances of these randomized workloads will only be created after the submission deadline and are thus unknown to both the submitters as well as the benchmark organizers. The instructions for creating them, i.e. providing a set or distribution of workloads to sample from, will be defined by this working group and made public with the call for submissions, to allow the members of this working group to submit as well as ensure that they do not possess any additional information compared to other submitters. We will refer to the unspecific workloads as *randomized workloads*, e.g. the set or distribution. The specific instance of such a randomized workload we call a *held-out workload*. That is, a held-out workload is a specific sample of a randomized workload that is used for one iteration of the benchmark. While we may reuse randomized workloads between iterations of the benchmark, new held-out workloads will be sampled for each new benchmark iteration.
diff --git a/scoring/performance_profile.py b/scoring/performance_profile.py
index 32acae9ab..f4f2d5679 100644
--- a/scoring/performance_profile.py
+++ b/scoring/performance_profile.py
@@ -274,7 +274,8 @@ def compute_performance_profiles(submissions,
                                  scale='linear',
                                  verbosity=0,
                                  strict=False,
-                                 self_tuning_ruleset=False):
+                                 self_tuning_ruleset=False,
+                                 output_dir=None):
   """Compute performance profiles for a set of submission by some time column.
 
   Args:
@@ -321,6 +322,9 @@ def compute_performance_profiles(submissions,
 
   # Sort workloads alphabetically (for better display)
   df = df.reindex(sorted(df.columns), axis=1)
+  # Save the time-to-target dataframe; skip when no output directory is given.
+  if output_dir is not None:
+    df.to_csv(os.path.join(output_dir, 'time_to_targets.csv'))
   # For each held-out workload set to inf if the base workload is inf or nan
   for workload in df.keys():
     if workload not in BASE_WORKLOADS:
diff --git a/scoring/score_submissions.py b/scoring/score_submissions.py
index 1fb39d193..8cc06b15f 100644
--- a/scoring/score_submissions.py
+++ b/scoring/score_submissions.py
@@ -210,7 +210,9 @@ def main(_):
         scale='linear',
         verbosity=0,
         self_tuning_ruleset=FLAGS.self_tuning_ruleset,
-        strict=FLAGS.strict)
+        strict=FLAGS.strict,
+        output_dir=FLAGS.output_dir,
+    )
   if not os.path.exists(FLAGS.output_dir):
     os.mkdir(FLAGS.output_dir)
   performance_profile.plot_performance_profiles(
diff --git a/setup.cfg b/setup.cfg
index eb570dafb..4afefd164 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -121,7 +121,6 @@ jax_core_deps =
   chex==0.1.7
   ml_dtypes==0.2.0
   protobuf==4.25.3
-  scipy==1.11.4
 
 # JAX CPU
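The scoring change above boils down to one pattern: persist the intermediate time-to-target dataframe only when a destination directory is supplied, so library callers that never pass `output_dir` are unaffected. A minimal standalone sketch of that pattern (the helper name `save_time_to_targets` and the sample dataframe are illustrative, not part of the repo):

```python
import os
import tempfile

import pandas as pd


def save_time_to_targets(df, output_dir=None):
    """Write df to output_dir/time_to_targets.csv.

    Returns the written path, or None when no output directory is given.
    """
    if output_dir is None:
        return None
    # Tolerate a not-yet-created directory (score_submissions.py creates
    # FLAGS.output_dir only after computing the profiles).
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, 'time_to_targets.csv')
    df.to_csv(path)
    return path


# Example: one submission scored on two workloads (times in seconds).
df = pd.DataFrame({'ogbg': [18477.0], 'wmt': [48151.0]}, index=['baseline'])
out_dir = tempfile.mkdtemp()
path = save_time_to_targets(df, output_dir=out_dir)
```

Returning `None` instead of raising keeps the no-output case a silent no-op, matching how the guarded `if output_dir is not None:` branch behaves inside `compute_performance_profiles`.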