* Rename benchmarks to allBenchmarks
* User can download resources per benchmark
* Expand click area of download status to text
* Refactor function signature for more clarity
* Add checksum validation before running a benchmark (see the sketch after this list)
* Fix Flutter tests
* Update SONAR_SCANNER_VERSION
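The checksum-validation commit pairs with the new `dialogContentChecksumError` and `dialogContentChecksumErrorHint` strings in the diff below. As a rough illustration only (the PR's actual implementation is not shown on this page), a pre-run check in Dart might hash each resource file and collect mismatches. The function name, the choice of MD5 via `package:crypto`, and the expected-checksum map are all assumptions:

```dart
// Minimal sketch, NOT the PR's implementation. Assumes each resource
// file path maps to an expected MD5 hex digest (e.g. from the benchmark
// config) and uses package:crypto for hashing.
import 'dart:io';

import 'package:crypto/crypto.dart';

/// Returns the paths whose file is missing or fails MD5 validation.
Future<List<String>> findChecksumFailures(
    Map<String, String> expectedMd5ByPath) async {
  final failures = <String>[];
  for (final entry in expectedMd5ByPath.entries) {
    final file = File(entry.key);
    if (!await file.exists()) {
      failures.add(entry.key); // would surface as dialogContentMissingFiles
      continue;
    }
    // Stream the file through the hash converter so large model files
    // are not loaded into memory all at once.
    final digest = await md5.bind(file.openRead()).first;
    if (digest.toString() != entry.value.toLowerCase()) {
      failures.add(entry.key); // would surface as dialogContentChecksumError
    }
  }
  return failures;
}
```

Any paths returned by such a check would then feed the error dialog whose strings this PR adds.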
flutter/lib/l10n/app_en.arb (3 additions, 0 deletions)

--- a/flutter/lib/l10n/app_en.arb
+++ b/flutter/lib/l10n/app_en.arb
@@ -93,6 +93,7 @@
   "dialogContentMissingFiles": "The following files don't exist:",
   "dialogContentMissingFilesHint": "Please go to the menu Resources to download the missing files.",
   "dialogContentChecksumError": "The following files failed checksum validation:",
+  "dialogContentChecksumErrorHint": "Please go to the menu Resources to clear the cache and download the files again.",
   "dialogContentNoSelectedBenchmarkError": "Please select at least one benchmark.",
   "benchModePerformanceOnly": "Performance Only",
@@ -122,7 +123,9 @@
   "benchInfoStableDiffusionDesc": "The Text to Image Gen AI benchmark adopts Stable Diffusion v1.5 for generating images from text prompts. It is a latent diffusion model. The benchmarked Stable Diffusion v1.5 refers to a specific configuration of the model architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet,123M CLIP ViT-L/14 text encoder for the diffusion model, and VAE Decoder of 49.5M parameters. The model was trained on 595k steps at resolution of 512x512, which enables it to generate high quality images. We refer you to https://huggingface.co/benjamin-paine/stable-diffusion-v1-5 for more information. The benchmark runs 20 denoising steps for inference, and uses a precalculated time embedding of size 1x1280. Reference models can be found here https://github.com/mlcommons/mobile_open/releases.\n\nFor latency benchmarking, we benchmark end to end, excluding the time embedding calculation and the tokenizer. For accuracy calculations, the app adopts the CLIP metric for text-to-image consistency, and further evaluation of the generated images using this Image Quality Aesthetic Assessment metric https://github.com/idealo/image-quality-assessment/tree/master?tab=readme-ov-file",
   "resourceDownload": "Download",
+  "resourceDownloadAll": "Download all",
   "resourceClear": "Clear",
+  "resourceClearAll": "Clear all",
   "resourceChecking": "Checking download status",
   "resourceDownloading": "Downloading",
   "resourceErrorMessage": "Some resources failed to load.\nIf you didn't change config from default you can try clearing the cache.\nIf you use a custom configuration file ensure that it has correct structure or switch back to default config.",