## - f: The function to optimize. It should take as input a 1D Tensor of the input variables and return a scalar.
## - options: Options object (see `steepestDescentOptions` for constructing one)
## - analyticGradient: The analytic gradient of `f` taking in and returning a 1D Tensor. If not provided, a finite difference approximation will be performed instead.
##
## Returns:
## - The final solution for the parameters, either because a (local) minimum was found or because the maximum number of iterations was reached.
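A minimal usage sketch for the proc documented above, assuming the signature shown in this diff and that `numericalnim` exports it at the top level; the quadratic test function and starting point are illustrative only:

```nim
import arraymancer
import numericalnim

# Convex quadratic with its minimum at (1, -2). No analyticGradient is
# passed, so a finite difference approximation is used, as documented.
proc f(x: Tensor[float]): float =
  (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0)

let x0 = [0.0, 0.0].toTensor
# Default options are assumed here; see `steepestDescentOptions` to customize.
let solution = steepestDescent(f, x0)
echo solution  # should approach [1.0, -2.0]
```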
## - f: The function to optimize. It should take as input a 1D Tensor of the input variables and return a scalar.
## - options: Options object (see `newtonOptions` for constructing one)
## - analyticGradient: The analytic gradient of `f` taking in and returning a 1D Tensor. If not provided, a finite difference approximation will be performed instead.
##
## Returns:
## - The final solution for the parameters, either because a (local) minimum was found or because the maximum number of iterations was reached.
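The same pattern works for `newton`, here with an analytic gradient supplied. A sketch under the same assumptions; the function and gradient below are made up for illustration:

```nim
import arraymancer
import numericalnim

# f(x) = sum(x_i^2), whose gradient is 2*x and whose minimum is the origin.
proc f(x: Tensor[float]): float =
  sum(x *. x)

proc grad(x: Tensor[float]): Tensor[float] =
  2.0 * x

let x0 = [3.0, -4.0].toTensor
# Passing the gradient skips the finite difference approximation.
let solution = newton(f, x0, analyticGradient = grad)
echo solution  # should approach [0.0, 0.0]
```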
## BFGS (Broyden–Fletcher–Goldfarb–Shanno) method for optimization.
##
## Inputs:
## - f: The function to optimize. It should take as input a 1D Tensor of the input variables and return a scalar.
## - options: Options object (see `bfgsOptions` for constructing one)
## - analyticGradient: The analytic gradient of `f` taking in and returning a 1D Tensor. If not provided, a finite difference approximation will be performed instead.
##
## Returns:
## - The final solution for the parameters, either because a (local) minimum was found or because the maximum number of iterations was reached.
# Use gemm and gemv with preallocated Tensors and setting beta = 0
var alpha = options.alpha
var x = x0.clone()
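For comparison, a hedged usage sketch of `bfgs` under the same assumptions; the Rosenbrock test function and starting point are illustrative, and default options are used to stay close to what the diff shows:

```nim
import arraymancer
import numericalnim

# Rosenbrock function: a curved valley with its minimum at (1, 1),
# a classic stress test for quasi-Newton methods like BFGS.
proc rosenbrock(x: Tensor[float]): float =
  100.0 * (x[1] - x[0] * x[0]) * (x[1] - x[0] * x[0]) +
    (1.0 - x[0]) * (1.0 - x[0])

let x0 = [-1.0, 1.0].toTensor
let solution = bfgs(rosenbrock, x0)
echo solution  # should approach [1.0, 1.0]
```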
@@ -421,15 +492,24 @@ proc bfgs*[U; T: not Tensor](f: proc(x: Tensor[U]): T, x0: Tensor[U], options: O
## LBFGS (Limited-memory Broyden–Fletcher–Goldfarb–Shanno) method for optimization.
##
## Inputs:
## - f: The function to optimize. It should take as input a 1D Tensor of the input variables and return a scalar.
## - options: Options object (see `lbfgsOptions` for constructing one)
## - analyticGradient: The analytic gradient of `f` taking in and returning a 1D Tensor. If not provided, a finite difference approximation will be performed instead.
##
## Returns:
## - The final solution for the parameters, either because a (local) minimum was found or because the maximum number of iterations was reached.
var alpha = options.alpha
var x = x0.clone()
let xLen = x.shape[0]
var fNorm = abs(f(x))
var gradient = 0.01 * analyticOrNumericGradient(analyticGradient, f, x0, options)
var gradNorm = vectorNorm(gradient)
var iters: int
let m = options.algoOptions.savedIterations # number of past iterations to save
var sk_queue = initDeque[Tensor[U]](m)
var yk_queue = initDeque[Tensor[T]](m)
# The first iteration is the problem: the gradient is huge and no adjustments have been made yet
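Since the diff replaces the hardcoded `m = 10` with `options.algoOptions.savedIterations`, here is a sketch of configuring that history length; it assumes `lbfgsOptions` exposes a `savedIterations` parameter matching the field read above, and the test function is illustrative:

```nim
import arraymancer
import numericalnim

# Separable quadratic in 50 dimensions, minimum at the origin.
proc f(x: Tensor[float]): float =
  sum(x *. x)

let x0 = ones[float](50)
# Keep only the 5 most recent (s_k, y_k) pairs instead of the former
# hardcoded 10; fewer pairs means less memory per iteration.
let options = lbfgsOptions[float](savedIterations = 5)
let solution = lbfgs(f, x0, options = options)
echo solution[0]  # all entries should be near 0
```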
@@ -475,7 +555,20 @@ proc lbfgs*[U; T: not Tensor](f: proc(x: Tensor[U]): T, x0: Tensor[U], m: int =
## Returns the whole covariance matrix or only the diagonal elements for the parameters in `params`.
##
## Inputs:
## - params: The parameters in a 1D Tensor that the uncertainties are wanted for.
## - fitFunc: The function used for fitting the parameters. (see `levmarq` for more)
## - yData: The measured values of the dependent variable as 1D Tensor.
## - xData: The values of the independent variable as 1D Tensor.
## - yError: The uncertainties of the `yData` as 1D Tensor. Ideally these should be the 1σ standard deviations.
## - returnFullCov: If true, the full covariance matrix will be returned as a 2D Tensor, else only the diagonal elements will be returned as a 1D Tensor.
##
## Returns:
##
## The uncertainties of the parameters in the form of a covariance matrix (or only the diagonal elements).
##
## Note: it is the covariance that is returned, so if you want the standard deviation you have to
## take the square root of it.
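To illustrate that note, a hedged end-to-end sketch: fit a model with `levmarq`, then take the square root of the returned variances. The linear model and data are made up, `levmarq` is assumed to take `(fitFunc, initialGuess, xData, yData)`, and named arguments (taken from the input list above) are used for `paramUncertainties` to avoid depending on positional order:

```nim
import arraymancer
import numericalnim
import math

# Hypothetical linear model y = a*x + b with two parameters.
proc fitFunc(params: Tensor[float], x: float): float =
  params[0] * x + params[1]

let xData = [0.0, 1.0, 2.0, 3.0].toTensor
let yData = [0.1, 2.1, 3.9, 6.1].toTensor
let yError = [0.1, 0.1, 0.1, 0.1].toTensor  # assumed 1σ uncertainties of yData

let fitted = levmarq(fitFunc, [1.0, 0.0].toTensor, xData, yData)
# Default returnFullCov gives just the diagonal, i.e. the parameter variances.
let variances = paramUncertainties(params = fitted, fitFunc = fitFunc,
                                   xData = xData, yData = yData, yError = yError)
let stdDevs = sqrt(variances)  # per the note: take the square root to get σ
echo stdDevs
```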