
Commit 52a0294

dependency updates (#95)
* dependency updates
* reverted changes in notebook
* cleared notebook to enable build process for docu
* clean up dead references in docu
* updated readme
* removed dor
* adjustments because of dependency changes
* check for ReverseDiffAdjoint()
* added testing benchmark for different sensitivities
* Quadrature Adjoint test
* removed dead line
* cleaned up tests
* more decimals for gradient printing
* added different gradient tests
* fixed runtest
* test fix
* test new sensitivity
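`ReverseDiffAdjoint()` and the Quadrature Adjoint mentioned in the commit message are sensitivity-algorithm choices from *SciMLSensitivity.jl*. For orientation, a sketch of the corresponding constructors; the actual FMIFlux test code is not part of this view:

```julia
using SciMLSensitivity

# discrete adjoint via ReverseDiff.jl (taping the solver steps)
sensealg_discrete = ReverseDiffAdjoint()

# continuous adjoint whose integrals are resolved by quadrature,
# with vector-Jacobian products computed through ReverseDiff.jl
sensealg_quad = QuadratureAdjoint(autojacvec = ReverseDiffVJP())
```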
1 parent e530e69 commit 52a0294

19 files changed: +301 −1994 lines changed

Project.toml

Lines changed: 3 additions & 5 deletions

@@ -1,9 +1,8 @@
 name = "FMIFlux"
 uuid = "fabad875-0d53-4e47-9446-963b74cae21f"
-version = "0.10.5"
+version = "0.10.6"
 
 [deps]
-ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
 Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
 DiffEqCallbacks = "459566f4-90b8-5000-8ac3-15dfb0a30def"
 DifferentiableEigen = "73a20539-4e65-4dcb-a56d-dc20f210a01b"
@@ -18,13 +17,12 @@ Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 ThreadPools = "b189fb0b-2eb5-4ed4-bc0c-d34c51242431"
 
 [compat]
-ChainRulesCore = "1.16.0"
 Colors = "0.12.8"
 DiffEqCallbacks = "2.26.0"
 DifferentiableEigen = "0.2.0"
 DifferentialEquations = "7.8.0"
-FMIImport = "0.15.7"
-Flux = "0.13.17"
+FMIImport = "0.15.8"
+Flux = "0.13, 0.14"
 Optim = "1.7.0"
 ProgressMeter = "1.7.0"
 Requires = "1.3.0"
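The loosened entry `Flux = "0.13, 0.14"` is a comma-separated list of caret specifiers, so any Flux release in `[0.13.0, 0.15.0)` is now accepted instead of only `0.13.17`. In recent Julia versions (1.8+) such an entry can be set from the Pkg REPL; an illustrative session, not taken from this commit:

```julia-repl
(@v1) pkg> compat Flux 0.13, 0.14
```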

README.md

Lines changed: 17 additions & 9 deletions

@@ -2,7 +2,12 @@
 # FMIFlux.jl
 
 ## What is FMIFlux.jl?
-[*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl) is a free-to-use software library for the Julia programming language, which offers the ability to set up NeuralFMUs just like NeuralODEs: You can place FMUs ([fmi-standard.org](http://fmi-standard.org/)) simply inside any feed-forward ANN topology and still keep the resulting hybrid model trainable with a standard (or custom) FluxML training process.
+[*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl) is a free-to-use software library for the Julia programming language that lets you place FMUs ([fmi-standard.org](http://fmi-standard.org/)) anywhere inside your ML topologies while keeping the resulting models trainable with a standard (or custom) FluxML training process. This includes, for example:
+- NeuralODEs including FMUs, so-called *Neural Functional Mock-up Units* (NeuralFMUs):
+  You can place FMUs inside of your ML topology.
+- PINNs including FMUs, so-called *Functional Mock-up Unit informed Neural Networks* (FMUINNs):
+  You can evaluate FMUs inside of your loss function.
+
 
 [![Dev Docs](https://img.shields.io/badge/docs-dev-blue.svg)](https://ThummeTo.github.io/FMIFlux.jl/dev)
 [![Test (latest)](https://github.com/ThummeTo/FMIFlux.jl/actions/workflows/TestLatest.yml/badge.svg)](https://github.com/ThummeTo/FMIFlux.jl/actions/workflows/TestLatest.yml)
@@ -12,40 +17,43 @@
 [![Run PkgEval](https://github.com/ThummeTo/FMIFlux.jl/actions/workflows/Eval.yml/badge.svg)](https://github.com/ThummeTo/FMIFlux.jl/actions/workflows/Eval.yml)
 [![Coverage](https://codecov.io/gh/ThummeTo/FMIFlux.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/ThummeTo/FMIFlux.jl)
 [![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
-
+[![FMIFlux Downloads](https://shields.io/endpoint?url=https://pkgs.genieframework.com/api/v1/badge/FMIFlux)](https://pkgs.genieframework.com?packages=FMIFlux)
 
 ## How can I use FMIFlux.jl?
 
 1\. Open a Julia-REPL, switch to package mode using `]`, activate your preferred environment.
 
 2\. Install [*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl):
 ```julia-repl
-(@v1.x) pkg> add FMIFlux
+(@v1) pkg> add FMIFlux
 ```
 
 3\. If you want to check that everything works correctly, you can run the tests bundled with [*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl):
 ```julia-repl
-(@v1.x) pkg> test FMIFlux
+(@v1) pkg> test FMIFlux
 ```
 
 4\. Have a look inside the [examples folder](https://github.com/ThummeTo/FMIFlux.jl/tree/examples/examples) in the examples branch or the [examples section](https://thummeto.github.io/FMIFlux.jl/dev/examples/overview/) of the documentation. All examples are available as Julia script (*.jl*), Jupyter notebook (*.ipynb*) and Markdown (*.md*).
 
 ## What is currently supported in FMIFlux.jl?
-- building and training ME-NeuralFMUs (event-handling is supported) with the default Flux-Front-End
-- building and training CS-NeuralFMUs with the default Flux-Front-End
-- different AD-frameworks: ForwardDiff.jl, ReverseDiff.jl (default setting) and Zygote.jl
+- building and training ME-NeuralFMUs (NeuralODEs) with support for event-handling (*DiffEqCallbacks.jl*) and discontinuous sensitivity analysis (*SciMLSensitivity.jl*)
+- building and training CS-NeuralFMUs
+- building and training NeuralFMUs consisting of multiple FMUs
+- building and training FMUINNs (PINNs)
+- different AD-frameworks: ForwardDiff.jl (CI-tested), ReverseDiff.jl (CI-tested, default setting), FiniteDiff.jl (not CI-tested) and Zygote.jl (not CI-tested)
 - ...
 
 ## What is under development in FMIFlux.jl?
 - performance optimizations
 - improved documentation
 - more examples
+- FMI3 integration
 - ...
 
 ## What Platforms are supported?
-[*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl) is tested (and testing) under Julia versions *1.6* (LTS) and *1.8* (latest) on Windows (latest) and Ubuntu (latest). MacOS should work, but untested.
+[*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl) is tested (and testing) under Julia versions *v1.6* (LTS) and *v1* (latest) on Windows (latest) and Ubuntu (latest). macOS should work, but is untested.
 [*FMIFlux.jl*](https://github.com/ThummeTo/FMIFlux.jl) currently only works with FMI2-FMUs.
-All shipped examples are automatically tested under Julia version *1.8* (latest) on Windows (latest).
+All shipped examples are automatically tested under Julia version *v1* (latest) on Windows (latest).
 
 ## What FMI.jl-Library should I use?
 ![FMI.jl Family](https://github.com/ThummeTo/FMI.jl/blob/main/docs/src/assets/FMI_JL_family.png?raw=true "FMI.jl Family")
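The updated feature list centers on NeuralFMUs. For orientation, a minimal ME-NeuralFMU sketch following the pattern of the bundled examples; the FMU path, the state count and the layer sizes are placeholders, and the FMU call signature should be verified against the examples of this release:

```julia
using FMI, FMIFlux, Flux
using DifferentialEquations: Tsit5

fmu = fmiLoad("path/to/model.fmu")       # placeholder path to a 2-state ME-FMU
numStates = 2                            # assumed for this sketch

# the FMU acts like any other layer inside the Flux chain
net = Chain(x -> fmu(; x=x, dx_refs=:all),
            Dense(numStates, 16, tanh),
            Dense(16, numStates))

neuralFMU = ME_NeuralFMU(fmu, net, (0.0, 5.0), Tsit5(); saveat=0.0:0.1:5.0)
solution = neuralFMU([1.0, 0.0])         # solve from an assumed initial state
```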

docs/make.jl

Lines changed: 2 additions & 1 deletion

@@ -19,8 +19,9 @@ makedocs(sitename="FMIFlux.jl",
             "Simple CS-NeuralFMU" => "examples/simple_hybrid_CS.md"
             "Simple ME-NeuralFMU" => "examples/simple_hybrid_ME.md"
             "Advanced ME-NeuralFMU" => "examples/advanced_hybrid_ME.md"
+            "JuliaCon 2023" => "examples/juliacon_2023.md"
+            "MDPI 2022" => "examples/mdpi_2022.md"
             "Modelica Conference 2021" => "examples/modelica_conference_2021.md"
-            "Physics-enhanced NeuralODEs in real-world applications" => "examples/mdpi_2022.md"
         ]
         "FAQ" => "faq.md"
         "Library Functions" => "library.md"

docs/src/examples/overview.md

Lines changed: 3 additions & 0 deletions

@@ -16,3 +16,6 @@ The examples show how to combine FMUs with machine learning ("NeuralFMU") and il
 - [__JuliaCon 2023: Using NeuralODEs in real life applications__](https://thummeto.github.io/FMIFlux.jl/dev/examples/juliacon_2023/): An example for a NeuralODE in a real world engineering scenario.
 - [__MDPI 2022: Physics-enhanced NeuralODEs in real-world applications__](https://thummeto.github.io/FMIFlux.jl/dev/examples/mdpi_2022/): An example for a NeuralODE in a real world modeling scenario (Contribution in *MDPI Electronics 2022*).
 - [__Modelica Conference 2021: NeuralFMUs__](https://thummeto.github.io/FMIFlux.jl/dev/examples/modelica_conference_2021/): Showing basics on how to train a NeuralFMU (Contribution for the *Modelica Conference 2021*).
+
+## Archived
+- ...

examples/src/juliacon_2023.ipynb

Lines changed: 87 additions & 1831 deletions
Large diffs are not rendered by default.

src/batch.jl

Lines changed: 12 additions & 12 deletions

@@ -4,34 +4,34 @@
 #
 
 import FMIImport: fmi2Real, fmi2FMUstate, fmi2EventInfo, fmi2ComponentState
-import ChainRulesCore: ignore_derivatives
+import FMIImport.ChainRulesCore: ignore_derivatives
 using DiffEqCallbacks: FunctionCallingCallback
 using FMIImport.ForwardDiff
 import FMIImport: unsense
 
-struct FMU2Loss{T}
+struct FMULoss{T}
     loss::T
     step::Integer
     time::Real
 
-    function FMU2Loss{T}(loss::T, step::Integer=0, time::Real=time()) where {T}
+    function FMULoss{T}(loss::T, step::Integer=0, time::Real=time()) where {T}
         inst = new{T}(loss, step, time)
         return inst
     end
 
-    function FMU2Loss(loss, step::Integer=0, time::Real=time())
+    function FMULoss(loss, step::Integer=0, time::Real=time())
         loss = unsense(loss)
        T = typeof(loss)
        inst = new{T}(loss, step, time)
        return inst
    end
 end
 
-function nominalLoss(l::FMU2Loss{T}) where T <: AbstractArray
+function nominalLoss(l::FMULoss{T}) where T <: AbstractArray
     return unsense(sum(l.loss))
 end
 
-function nominalLoss(l::FMU2Loss{T}) where T <: Real
+function nominalLoss(l::FMULoss{T}) where T <: Real
     return unsense(l.loss)
 end
 
@@ -45,7 +45,7 @@ mutable struct FMU2SolutionBatchElement <: FMU2BatchElement
     initialState::Union{fmi2FMUstate, Nothing}
     initialComponentState::fmi2ComponentState
     initialEventInfo::Union{fmi2EventInfo, Nothing}
-    losses::Array{<:FMU2Loss}
+    losses::Array{<:FMULoss}
     step::Integer
 
     saveat::Union{AbstractVector{<:Real}, Nothing}
@@ -65,7 +65,7 @@ mutable struct FMU2SolutionBatchElement <: FMU2BatchElement
 
         inst.initialState = nothing
         inst.initialEventInfo = nothing
-        inst.losses = Array{FMU2Loss,1}()
+        inst.losses = Array{FMULoss,1}()
         inst.step = 0
 
         inst.saveat = nothing
@@ -83,7 +83,7 @@ mutable struct FMU2EvaluationBatchElement <: FMU2BatchElement
     tStart::fmi2Real
     tStop::fmi2Real
 
-    losses::Array{<:FMU2Loss}
+    losses::Array{<:FMULoss}
     step::Integer
 
     saveat::Union{AbstractVector{<:Real}, Nothing}
@@ -102,7 +102,7 @@ mutable struct FMU2EvaluationBatchElement <: FMU2BatchElement
         inst.tStart = -Inf
         inst.tStop = Inf
 
-        inst.losses = Array{FMU2Loss,1}()
+        inst.losses = Array{FMULoss,1}()
         inst.step = 0
 
         inst.saveat = nothing
@@ -335,7 +335,7 @@ function loss!(batchElement::FMU2SolutionBatchElement, lossFct; logLoss::Bool=tr
 
     ignore_derivatives() do
         if logLoss
-            push!(batchElement.losses, FMU2Loss(loss, batchElement.step))
+            push!(batchElement.losses, FMULoss(loss, batchElement.step))
         end
     end
 
@@ -370,7 +370,7 @@ function loss!(batchElement::FMU2EvaluationBatchElement, lossFct; logLoss::Bool=
 
    ignore_derivatives() do
        if logLoss
-            push!(batchElement.losses, FMU2Loss(loss, batchElement.step))
+            push!(batchElement.losses, FMULoss(loss, batchElement.step))
        end
    end
 
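The rename from `FMU2Loss` to `FMULoss` keeps the constructors and the `nominalLoss` accessors shown above. A short usage sketch with hypothetical values, following the definitions in this diff:

```julia
# Hypothetical values; behavior follows the definitions above.
l = FMULoss(0.25)            # scalar loss, T inferred as Float64
nominalLoss(l)               # returns 0.25 (unsense strips AD tracking, no-op for plain Floats)

lv = FMULoss([0.1, 0.2])     # array-valued loss hits the AbstractArray method
nominalLoss(lv)              # returns the sum of the entries, ≈ 0.3
```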

src/flux_overload.jl

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 #
 
 import Flux
-import ChainRulesCore
+import FMIImport.ChainRulesCore
 import Flux.Random: AbstractRNG
 import Flux.LinearAlgebra: I
 
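Both `src/batch.jl` and `src/flux_overload.jl` now reach `ChainRulesCore` through `FMIImport` instead of importing it directly, matching its removal from the `[deps]` and `[compat]` sections of `Project.toml`. A minimal sketch of the `ignore_derivatives` pattern used in `batch.jl`; the loss helper itself is hypothetical:

```julia
import FMIImport.ChainRulesCore: ignore_derivatives

# Hypothetical loss helper illustrating the pattern from batch.jl:
function loggedLoss(y, y_ref)
    l = sum(abs2, y .- y_ref)
    ignore_derivatives() do
        @info "current loss" l   # bookkeeping stays off the AD tape
    end
    return l
end
```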
