## Breaking Changes
### v4
- `TensorLayer` has been removed; use `Boltz.Layers.TensorProductLayer` instead (see the sketch after this list).
- Basis functions in DiffEqFlux have been removed in favor of the `Boltz.Basis` module.
- `SplineLayer` has been removed; use `Boltz.Layers.SplineLayer` instead.
- `NeuralHamiltonianDE` has been removed; use `NeuralODE` with `Layers.HamiltonianNN` instead.
- `HamiltonianNN` has been removed in favor of `Layers.HamiltonianNN`.
- `Lux` and `Boltz` have been updated to v1.
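As a hedged migration sketch for the `TensorLayer` and basis-function removals (the exact constructor arguments may differ; consult the Boltz docs):

```julia
using Boltz, Lux

# Chebyshev basis from Boltz.Basis, replacing the removed DiffEqFlux basis functions.
basis = Basis.Chebyshev(4)

# Tensor-product layer from Boltz.Layers, replacing the removed TensorLayer.
layer = Layers.TensorProductLayer([basis, basis], 4)
```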
### v3
- The Flux dependency has been dropped. If a layer that is not a Lux `AbstractLuxLayer` is passed, we try to convert it to a Lux model automatically with `FromFluxAdaptor()(model)`, as sketched below.
- `Flux` is no longer re-exported from `DiffEqFlux`. Instead, we re-export `Lux`.
- `NeuralDAE` now allows an optional `du0` as input.
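A quick illustration of the automatic conversion; the Flux model here is purely illustrative, and the conversion requires Flux to be loaded alongside Lux:

```julia
using Flux, Lux

# An illustrative Flux model; any Flux layer or chain is handled the same way.
flux_model = Flux.Chain(Flux.Dense(2 => 16, tanh), Flux.Dense(16 => 2))

# Convert to a native Lux model, as described above.
lux_model = Lux.FromFluxAdaptor()(flux_model)
```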
### Training the HamiltonianNN
We parameterize the Hamiltonian neural network (HNN) with a small multilayer perceptron. HNNs are trained by optimizing the gradients of the neural network. Zygote currently doesn't support nesting itself, so we will be using ForwardDiff in the training loop to compute the gradients of the HNN layer for optimization.
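A hedged sketch of that idea (not the tutorial's exact code): the Hamiltonian is a small MLP, and ForwardDiff supplies the inner derivative so Zygote only ever differentiates the outer loss:

```julia
using Lux, ForwardDiff, Random

# A small MLP representing the scalar Hamiltonian H(q, p): R^2 -> R.
mlp = Chain(Dense(2 => 64, tanh), Dense(64 => 1))
ps, st = Lux.setup(Random.default_rng(), mlp)

# Symplectic vector field (q̇, ṗ) = (∂H/∂p, -∂H/∂q), with the inner
# derivative computed by ForwardDiff instead of nested Zygote.
function hnn_vector_field(x, ps, st)
    H = z -> first(first(mlp(z, ps, st)))
    g = ForwardDiff.gradient(H, x)
    return [g[2], -g[1]]
end
```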
In order to visualize the learned trajectories, we need to solve the ODE. We will use the `NeuralODE` layer with the `HamiltonianNN` layer to solve the ODE.
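A hedged sketch of that wiring, assuming a trained `HamiltonianNN` called `hnn`; the solver and keyword choices here are illustrative:

```julia
using DiffEqFlux, OrdinaryDiffEq

tspan = (0.0f0, 1.0f0)
# Wrap the Hamiltonian layer in a NeuralODE so the learned dynamics can be solved.
model = NeuralODE(hnn, tspan, Tsit5(); save_everystep=false, save_start=true)
# model(x0, ps, st) then returns the solution starting from the initial state x0.
```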
```julia
using GraphNeuralNetworks, DifferentialEquations
using DiffEqFlux: NeuralODE
using GraphNeuralNetworks.GNNGraphs: normalized_adjacency
using Lux, NNlib, Optimisers, Zygote, Random, ComponentArrays
using Lux: AbstractLuxLayer, glorot_normal, zeros32
import Lux: initialparameters, initialstates
using SciMLSensitivity
using Statistics: mean
```
## Define the Graph Neural Network
Here, we define a type of graph neural network called `GCNConv`. We use the name `ExplicitGCNConv` to avoid naming conflicts with `GraphNeuralNetworks`. For more information on defining a layer with `Lux`, please consult the [doc](http://lux.csail.mit.edu/dev/introduction/overview/#AbstractLuxLayer-API).
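A minimal sketch of what such a layer can look like, relying on the imports above; the tutorial's actual `ExplicitGCNConv` may differ in details, and the field names here are illustrative:

```julia
struct ExplicitGCNConv{F, M <: AbstractMatrix} <: AbstractLuxLayer
    Â::M            # precomputed normalized adjacency (e.g. from normalized_adjacency)
    in_dims::Int
    out_dims::Int
    activation::F
end

function initialparameters(rng::AbstractRNG, l::ExplicitGCNConv)
    return (weight=glorot_normal(rng, l.out_dims, l.in_dims),
        bias=zeros32(rng, l.out_dims, 1))
end

initialstates(::AbstractRNG, ::ExplicitGCNConv) = NamedTuple()

# GCN propagation rule: σ.(W * X * Â .+ b), with X of size (in_dims, num_nodes).
function (l::ExplicitGCNConv)(x::AbstractMatrix, ps, st)
    return l.activation.(ps.weight * x * l.Â .+ ps.bias), st
end
```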