1. What is `AutoEmulate`?
   <!-- A brief description of what the package does, its main features, and its intended use case. -->
   - A Python package that makes it easy to create emulators for complex simulations. It takes a set of simulation inputs `X` and outputs `y`, and automatically fits, optimises and evaluates various machine learning models to find the best emulator. The emulator can then be used as a drop-in replacement for the simulation, but will be much faster and computationally cheaper to evaluate. We have also implemented global sensitivity analysis as a common emulator application and are working towards making `AutoEmulate` a true end-to-end package for building emulators. A typical workflow is sketched below.
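
   A minimal sketch of that workflow on toy data, assuming the import path `autoemulate.compare` and that `compare()` returns the best-performing emulator (the exact accessor may differ between versions):

   ```python
   import numpy as np
   from autoemulate.compare import AutoEmulate  # assumed import path

   # X: simulation inputs, y: simulation outputs (see "Usage Questions" below)
   X = np.random.uniform(0, 1, size=(200, 3))
   y = np.column_stack([X.sum(axis=1), X.prod(axis=1)])

   ae = AutoEmulate()
   ae.setup(X, y)              # configure data splits, preprocessing and models
   best_model = ae.compare()   # fit and cross-validate the candidate emulators

   # the fitted emulator is a drop-in (and much faster) stand-in for the simulation
   X_new = np.random.uniform(0, 1, size=(10, 3))
   y_pred = best_model.predict(X_new)
   ```
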
2. How do I know whether `AutoEmulate` is the right tool for me?
   - You need to build an emulator for a simulation.
   - You want to do global sensitivity analysis.
   - Your inputs `X` and outputs `y` are numeric and complete (we don't support missing data yet).
   - You have one or more input parameters and one or more output variables.
   - You have a smallish dataset, on the order of hundreds to a few thousand samples. All default emulator parameters and search spaces are optimised for smaller datasets.
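
   A quick way to check the "numeric and complete" requirement before running `AutoEmulate`, using plain NumPy (a sketch, not part of the package's API):

   ```python
   import numpy as np

   X = np.asarray(X, dtype=float)  # raises if the inputs are not numeric
   y = np.asarray(y, dtype=float)

   assert X.ndim == 2 and y.ndim in (1, 2), "expected (n_samples, n_parameters) and (n_samples, n_outputs)"
   assert X.shape[0] == y.shape[0], "X and y need one row per simulation run"
   assert not np.isnan(X).any() and not np.isnan(y).any(), "missing data is not supported yet"
   ```
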

3. Does `AutoEmulate` support multi-output data?
   - Yes, all models support multi-output data. Some do so natively; others are wrapped in a `MultiOutputRegressor`, which fits one model per target variable.
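
   To illustrate the wrapping idea with plain scikit-learn (this only shows the mechanism; `AutoEmulate` applies it internally where a model is not natively multi-output):

   ```python
   import numpy as np
   from sklearn.ensemble import GradientBoostingRegressor
   from sklearn.multioutput import MultiOutputRegressor

   X = np.random.rand(100, 3)
   y = np.random.rand(100, 2)  # two output variables

   # GradientBoostingRegressor only handles a single target,
   # so MultiOutputRegressor fits one copy of it per output column.
   model = MultiOutputRegressor(GradientBoostingRegressor())
   model.fit(X, y)
   print(model.predict(X[:5]).shape)  # (5, 2)
   ```
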

4. Does `AutoEmulate` support temporal or spatial data?
   - Not explicitly. The train-test split simply takes a random subset as the test set, and so does the k-fold cross-validation.
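
   If your data has a temporal ordering and you want the test set to be the most recent part of the series, one option is to hold it out yourself before handing the rest to `AutoEmulate` (a sketch, assuming the rows of `X` and `y` are ordered in time):

   ```python
   n_holdout = 50  # keep the last 50 simulation runs as a chronological test set
   X_train, X_holdout = X[:-n_holdout], X[-n_holdout:]
   y_train, y_holdout = y[:-n_holdout], y[-n_holdout:]
   # pass X_train, y_train to AutoEmulate and evaluate the chosen emulator on the holdout yourself
   ```
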

5. Why does `AutoEmulate` take a long time to run on my dataset?
   - The package fits a lot of models, particularly when hyperparameters are optimised. With, say, 8 default models and 5-fold cross-validation, this amounts to 40 model fits; adding hyperparameter optimisation (`n_iter=20`) brings it to 800 model fits. Some models, such as Gaussian Processes and Neural Processes, will take a long time to run on a CPU. However, don't despair! There is a [speeding up AutoEmulate guide](../../tutorials/02_speed.ipynb). As a rule of thumb, if your dataset is smaller than 1000 samples you should be fine; if it's larger and you want to optimise hyperparameters, you might want to read the guide.

## Usage Questions

1. What data do I need to provide to `AutoEmulate` to build an emulator?
   <!-- A simple example to get a new user started, possibly pointing to more detailed tutorials or documentation. -->
   - You'll need two input objects: `X` and `y`. `X` is an ndarray or pandas DataFrame of shape `(n_samples, n_parameters)` and `y` is an ndarray or pandas DataFrame of shape `(n_samples, n_outputs)`. Each sample is one simulation run, so each row of `X` is a set of input parameters and the corresponding row of `y` holds the simulation outputs for those parameters. You'll usually have created `X` using Latin hypercube sampling or a similar method, and `y` by running the simulation on those `X` inputs.
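
   For example, a minimal sketch using SciPy's Latin hypercube sampler and a toy two-parameter, two-output simulation (the simulator function and parameter bounds are purely illustrative):

   ```python
   import numpy as np
   from scipy.stats import qmc

   # sample 200 parameter sets from a Latin hypercube over the parameter bounds
   sampler = qmc.LatinHypercube(d=2, seed=0)
   X = qmc.scale(sampler.random(n=200), l_bounds=[0.1, 0.0], u_bounds=[10.0, 1.0])

   def simulate(params):
       """Stand-in for an expensive simulation returning two outputs."""
       a, b = params
       return np.array([a * np.sin(b), a + b**2])

   y = np.array([simulate(p) for p in X])
   print(X.shape, y.shape)  # (200, 2) (200, 2)
   ```
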

2. Can I use `AutoEmulate` for commercial purposes?
   <!-- Information on licensing and any restrictions on use. -->
   - Yes. It's licensed under the MIT license, which allows for commercial use. See the [license](../../../LICENSE) for more information.

## Advanced Usage

1. Does AutoEmulate support parallel processing or high-performance computing (HPC) environments?
   <!-- Details on the software's capabilities to leverage multi-threading, distributed computing, or HPC resources to speed up computations. -->
   - Yes, [AutoEmulate.setup()](../../reference/compare.rst) has an `n_jobs` parameter which lets you parallelise cross-validation and hyperparameter optimisation. We are also working on GPU support for some models.

2. Can AutoEmulate be integrated with other data analysis or simulation tools?
   <!-- Information on APIs, file formats, or protocols that facilitate the integration of AutoEmulate with other software ecosystems. -->
   - `AutoEmulate` takes simple `X` and `y` ndarrays as input, and returns emulators which are [scikit-learn estimators](https://scikit-learn.org/1.5/developers/develop.html), so they can be saved, loaded and used like any other scikit-learn model.
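
   For example, persisting a fitted emulator and reloading it later (a sketch using `joblib`, one common way to save scikit-learn estimators; `best_model` is assumed to be the emulator returned by the comparison step):

   ```python
   import joblib

   joblib.dump(best_model, "emulator.joblib")   # save the fitted emulator to disk

   emulator = joblib.load("emulator.joblib")    # ...and load it in another script
   y_pred = emulator.predict(X_new)             # standard scikit-learn predict() interface
   ```
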

## Data Handling

1. What are the best practices for data preprocessing before using `AutoEmulate`?
   <!-- Tips and recommendations on preparing data, including normalisation, dealing with missing values, or data segmentation. -->
   - You will typically run the simulation on a selected set of input parameters (the experimental design), generated with a Latin hypercube or another sampling method. `AutoEmulate` currently needs all inputs to be numeric, and we don't support missing data. By default, `AutoEmulate` scales the input data to zero mean and unit variance, and for some models it also scales the output data. There's also the option to do dimensionality reduction in `setup()`.
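
   For reference, the default input scaling amounts to standardisation, and dimensionality reduction to something like a PCA step; in scikit-learn terms it looks roughly like this (shown only to illustrate what these options do — `AutoEmulate` applies them internally, so you don't need to do this yourself):

   ```python
   from sklearn.decomposition import PCA
   from sklearn.preprocessing import StandardScaler

   X_scaled = StandardScaler().fit_transform(X)                 # zero mean, unit variance per input parameter
   X_reduced = PCA(n_components=0.99).fit_transform(X_scaled)   # keep components explaining 99% of the variance
   ```
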

## Troubleshooting

1. What common issues might I encounter when using `AutoEmulate`, and how can I solve them?
   <!-- A list of frequently encountered problems with suggested solutions, possibly linked to a more extensive troubleshooting guide. -->
   - `AutoEmulate.setup()` has a `log_to_file` option to log all warnings/errors to a file. It also has a `verbose` option to print more information to the console. If you encounter an error, please open an issue (see below).
   - One common issue is that the Jupyter notebook kernel crashes when running `compare()` in parallel, often due to `LightGBM`. In this case, we recommend either specifying `n_jobs=1` or selecting specific (non-LightGBM) models in `setup()` with the `models` parameter.
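
   A sketch of that workaround, assuming `ae`, `X` and `y` as in the quickstart sketch above (the model names passed to `models` are illustrative; check the documentation for the exact names available in your version):

   ```python
   ae = AutoEmulate()
   ae.setup(
       X, y,
       n_jobs=1,          # avoid parallel-processing crashes in notebooks
       log_to_file=True,  # write warnings/errors to a log file for debugging
       # models=["GaussianProcess", "RandomForest"],  # optionally restrict to specific models
   )
   ae.compare()
   ```
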
2. How can I report a bug or request a feature in `AutoEmulate`?
   <!-- Instructions on the proper channels for reporting issues or suggesting enhancements, including any templates or information to include. -->
   - You can report a bug or request a new feature through the [issue templates](https://github.com/alan-turing-institute/autoemulate/issues/new/choose) in our GitHub repository. Head over there, choose the template that fits your purpose, and get started.

## Community and Support

1. Are there any community projects or collaborations using `AutoEmulate` I can join or learn from?
   <!-- Information on community-led projects, study groups, or collaborative research initiatives involving AutoEmulate. -->
   - Reach out to Martin ([email](mailto:[email protected])) or Sophie ([email](mailto:sarana@turing.ac.uk)) for more information.

2. Where can I find tutorials or case studies on using `AutoEmulate`?
   <!-- Directions to comprehensive learning materials, such as video tutorials (if we want to record that), written guides, or published research papers using AutoEmulate. -->
   - See the [tutorial](../../tutorials/01_start.ipynb) for a comprehensive guide on using the package. Case studies are coming soon.

3. How can I stay updated on new releases or updates to AutoEmulate?
   <!-- Guidance on subscribing to newsletters when/if we will have that, community calls if we start that, following the project on social media if we want to create those platforms, or joining community forums/Slack once we have that ready... -->
   - Watch the [AutoEmulate repository](https://github.com/alan-turing-institute/autoemulate).

4. What support options are available if I need help with AutoEmulate?
   <!-- Overview of support resources, including documentation, community forums/Slack when we have that ready... -->
   - Please open an issue on GitHub or contact the maintainer ([email](mailto:[email protected])) directly.