Commit 4de831d

Merge branch 'main' of github.com:JuliaStats/MixedModels.jl into pa/regression-formulae
2 parents: 076535b + 6408a34

40 files changed: +117, -174 lines changed

.github/workflows/current.yml

Lines changed: 3 additions & 2 deletions
@@ -19,12 +19,13 @@ jobs:
   ci:
     runs-on: ${{ matrix.os }}
     strategy:
+      fail-fast: false
       matrix:
         julia-version: [1]
         # julia-arch: [x64]
         os:
-          - ubuntu-22.04
-          - macOS-14 # apple silicon!
+          - ubuntu-24.04
+          - macOS-15 # apple silicon!
     steps:
       - uses: actions/checkout@v5
       - uses: julia-actions/setup-julia@v2

NEWS.md

Lines changed: 7 additions & 1 deletion
@@ -2,12 +2,13 @@ MixedModels v5.0.0 Release Notes
 ==============================
 - Optimization is now performed _without constraints_. In a post-fitting step, the Cholesky factor is canonicalized to have non-negative diagonal elements. [#840]
 - The default optimizer has changed to NLopt's implementation of NEWUOA where possible. NLopt's implementation fails on 1-dimensional problems, so in the case of a single, scalar random effect, BOBYQA is used instead. In the future, the default optimizer backend will likely change to PRIMA and NLopt support will be moved to an extension. This change in backend is currently blocked by an issue with PRIMA.jl when running in VSCode's built-in REPL on Linux. [#840]
-- [BREAKING] Support for constrained optimization has been completely removed, i.e. the field `lowerbd` has been removed from `OptSummary`.
+- [BREAKING] Support for constrained optimization has been completely removed, i.e. the field `lowerbd` has been removed from `OptSummary`. [#849]
 - [BREAKING] A fitlog is always kept -- the deprecated keyword argument `thin` has been removed, as has the `fitlog` keyword argument. [#850]
 - The fitlog is now stored as a Tables.jl-compatible column table. [#850]
 - Internal code around the default optimizer has been restructured. In particular, the NLopt backend has been moved to a submodule, which will make it easier to move it to an extension if we promote another backend to the default. [#853]
 - Internal code around optimization in profiling has been restructured so that fitting done during calls to `profile` respects the `backend` and `optimizer` settings. [#853]
 - The `prfit!` convenience function has been removed. [#853]
+- The `dataset` and `datasets` functions have been removed. They are now housed in `MixedModelsDatasets`. [#854]
 - The local implementation of `fulldummy` and the nesting syntax has been removed and a dependency on RegressionFormulae.jl for their implementation has been added. [#855]

 MixedModels v4.38.0 Release Notes
@@ -670,3 +671,8 @@ Package dependencies
 [#840]: https://github.com/JuliaStats/MixedModels.jl/issues/840
 [#841]: https://github.com/JuliaStats/MixedModels.jl/issues/841
 [#842]: https://github.com/JuliaStats/MixedModels.jl/issues/842
+[#849]: https://github.com/JuliaStats/MixedModels.jl/issues/849
+[#850]: https://github.com/JuliaStats/MixedModels.jl/issues/850
+[#853]: https://github.com/JuliaStats/MixedModels.jl/issues/853
+[#854]: https://github.com/JuliaStats/MixedModels.jl/issues/854
+[#855]: https://github.com/JuliaStats/MixedModels.jl/issues/855
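
For readers migrating across the changes listed above, a minimal sketch follows. It assumes MixedModelsDatasets.jl is installed alongside MixedModels v5 and that the fitlog remains reachable through the fitted model's `optsum` field; it is illustrative only and not part of this commit.

```julia
# Hedged migration sketch for the v5.0.0 notes above.
using DataFrames, MixedModels
using MixedModelsDatasets: dataset        # was: using MixedModels: dataset

sleepstudy = dataset(:sleepstudy)
m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleepstudy)

# A fitlog is always kept and is stored as a Tables.jl-compatible column table,
# so it can be materialized directly (the `m.optsum.fitlog` accessor is assumed).
fitlog = DataFrame(m.optsum.fitlog)
first(fitlog, 5)
```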

benchmark/benchmarks.jl

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 using BenchmarkTools, MixedModels
-using MixedModels: dataset
+using MixedModelsDatasets: dataset

 const SUITE = BenchmarkGroup()

docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
 FreqTables = "da1fdf0e-e0ff-5433-a45f-9bb5ff651cb1"
 Gadfly = "c91e804a-d5a3-530f-b6f0-dfbca275c004"
 MixedModels = "ff71e718-51f3-5ec2-a782-8ffcbfa3c316"
+MixedModelsDatasets = "7e9fb7ac-9f67-43bf-b2c8-96ba0796cbb6"
 ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 StatsAPI = "82ae8749-77ed-4fe6-ae5f-f523153014b0"

docs/src/GaussHermite.md

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@ The definition of `MixedModels.GHnorm` is similar to the `gausshermitenorm` func
 GHnorm
 ```
 ```@example Main
-using MixedModels
+using MixedModels, MixedModelsDatasets
 GHnorm(3)
 ```

@@ -100,7 +100,7 @@ Several covariates were recorded including the woman's age (centered at the mean
 The version of the data used here is that used in the review of multilevel modeling software conducted by the Center for Multilevel Modelling, currently at the University of Bristol (http://www.bristol.ac.uk/cmm/learning/mmsoftware/data-rev.html).
 These data are available as the `:contra` dataset.
 ```@example Main
-contra = DataFrame(MixedModels.dataset(:contra))
+contra = DataFrame(MixedModelsDatasets.dataset(:contra))
 describe(contra)
 ```

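
Since the hunk above touches the `GHnorm` example, a small hedged sketch of how the normalized rule is typically used follows; it assumes the object returned by `GHnorm(k)` exposes abscissae and weights as fields `z` and `w`, as in the package documentation, and it is not part of this commit.

```julia
# Hedged GHnorm illustration: approximate E[g(X)] for X ~ N(μ, σ²) as a weighted sum over the nodes.
using MixedModels

ghn = GHnorm(5)                                # 5-point normalized Gauss-Hermite rule
μ, σ = 3.0, 2.0
# E[X²] = μ² + σ² = 13; the quadrature approximation should match closely:
approx = sum(@. ghn.w * abs2(μ + σ * ghn.z))
```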

docs/src/bootstrap.md

Lines changed: 3 additions & 3 deletions
@@ -27,12 +27,12 @@ package for [`R`](https://www.r-project.org) is fit by
 ```@example Main
 using DataFrames
 using Gadfly # plotting package
-using MixedModels
+using MixedModels, MixedModelsDatasets
 using Random
 ```

 ```@example Main
-dyestuff = MixedModels.dataset(:dyestuff)
+dyestuff = MixedModelsDatasets.dataset(:dyestuff)
 m1 = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), dyestuff)
 ```

@@ -88,7 +88,7 @@ However, it is not as straightforward to detect singularity in vector-valued ran

 For example, if we bootstrap a model fit to the `sleepstudy` data
 ```@example Main
-sleepstudy = MixedModels.dataset(:sleepstudy)
+sleepstudy = MixedModelsDatasets.dataset(:sleepstudy)
 contrasts = Dict(:subj => Grouping())
 m2 = let f = @formula reaction ~ 1+days+(1+days|subj)
     fit(MixedModel, f, sleepstudy; contrasts)
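
To make the change above concrete, here is a hedged sketch of the parametric-bootstrap workflow this page documents; it assumes MixedModelsDatasets.jl is installed and that the bootstrap object exposes an `allpars` table as in the package documentation, and it is not part of this commit.

```julia
# Hedged parametric-bootstrap sketch.
using DataFrames, MixedModels, Random
using MixedModelsDatasets: dataset

sleepstudy = dataset(:sleepstudy)
m2 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleepstudy;
         contrasts=Dict(:subj => Grouping()))

boot = parametricbootstrap(MersenneTwister(42), 500, m2)  # 500 simulated re-fits
describe(DataFrame(boot.allpars))                         # summarize the parameter draws
```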

docs/src/constructors.md

Lines changed: 12 additions & 12 deletions
@@ -14,8 +14,8 @@ using DisplayAs
 ```

 ```@example Main
-using DataFrames, MixedModels, StatsModels
-dyestuff = MixedModels.dataset(:dyestuff)
+using DataFrames, MixedModels, MixedModelsDatasets, StatsModels
+dyestuff = MixedModelsDatasets.dataset(:dyestuff)
 ```

 ```@example Main
@@ -71,7 +71,7 @@ Notice that both are equivalent.

 ```@example Main
 using BenchmarkTools
-dyestuff2 = MixedModels.dataset(:dyestuff2)
+dyestuff2 = MixedModelsDatasets.dataset(:dyestuff2)
 @benchmark fit(MixedModel, $fm, $dyestuff2)
 ```

@@ -110,7 +110,7 @@ It corresponds to a shift in the intercept for each level of the grouping factor
 The *sleepstudy* data are observations of reaction time, `reaction`, on several subjects, `subj`, after 0 to 9 days of sleep deprivation, `days`.
 A model with random intercepts and random slopes for each subject, allowing for within-subject correlation of the slope and intercept, is fit as
 ```@example Main
-sleepstudy = MixedModels.dataset(:sleepstudy)
+sleepstudy = MixedModelsDatasets.dataset(:sleepstudy)
 fm2 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days|subj)), sleepstudy)
 DisplayAs.Text(ans) # hide
 ```
@@ -120,14 +120,14 @@ DisplayAs.Text(ans) # hide
 A model for the *Penicillin* data incorporates random effects for the plate, and for the sample.
 As every sample is used on every plate these two factors are *crossed*.
 ```@example Main
-penicillin = MixedModels.dataset(:penicillin)
+penicillin = MixedModelsDatasets.dataset(:penicillin)
 fm3 = fit(MixedModel, @formula(diameter ~ 1 + (1|plate) + (1|sample)), penicillin)
 DisplayAs.Text(ans) # hide
 ```

 In contrast, the `cask` grouping factor is *nested* within the `batch` grouping factor in the *Pastes* data.
 ```@example Main
-pastes = DataFrame(MixedModels.dataset(:pastes))
+pastes = DataFrame(MixedModelsDatasets.dataset(:pastes))
 describe(pastes)
 ```
 This can be expressed using the solidus (the "`/`" character) to separate grouping factors, read "`cask` nested within `batch`":
@@ -147,7 +147,7 @@ In observational studies it is common to encounter *partially crossed* grouping
 For example, the *InstEval* data are course evaluations by students, `s`, of instructors, `d`.
 Additional covariates include the academic department, `dept`, in which the course was given and `service`, whether or not it was a service course.
 ```@example Main
-insteval = MixedModels.dataset(:insteval)
+insteval = MixedModelsDatasets.dataset(:insteval)
 fm5 = fit(MixedModel, @formula(y ~ 1 + service * dept + (1|s) + (1|d)), insteval)
 DisplayAs.Text(ans) # hide
 ```
@@ -219,7 +219,7 @@ To create a GLMM representation, the distribution family for the response, and p
 You can either use `fit(MixedModel, ...)` or `glmm(...)` to fit the model. For instance:

 ```@example Main
-verbagg = MixedModels.dataset(:verbagg)
+verbagg = MixedModelsDatasets.dataset(:verbagg)
 verbaggform = @formula(r2 ~ 1 + anger + gender + btype + situ + mode + (1|subj) + (1|item));
 gm1 = fit(MixedModel, verbaggform, verbagg, Bernoulli())
 DisplayAs.Text(ans) # hide
@@ -255,7 +255,7 @@ The optional argument `nAGQ=k` causes evaluation of the deviance function to use
 adaptive Gauss-Hermite quadrature rule.
 This method only applies to models with a single, simple, scalar random-effects term, such as
 ```@example Main
-contraception = MixedModels.dataset(:contra)
+contraception = MixedModelsDatasets.dataset(:contra)
 contraform = @formula(use ~ 1 + age + abs2(age) + livch + urban + (1|dist));
 bernoulli = Bernoulli()
 deviances = Dict{Symbol,Float64}()
@@ -508,7 +508,7 @@ sum(leverage(fm2))

 When a model converges to a singular covariance, such as
 ```@example Main
-fm3 = fit(MixedModel, @formula(yield ~ 1+(1|batch)), MixedModels.dataset(:dyestuff2))
+fm3 = fit(MixedModel, @formula(yield ~ 1+(1|batch)), MixedModelsDatasets.dataset(:dyestuff2))
 DisplayAs.Text(ans) # hide
 ```
 the effective degrees of freedom is the lower bound.
@@ -519,7 +519,7 @@ sum(leverage(fm3))
 Models for which the estimates of the variances of the random effects are large relative to the residual variance have effective degrees of freedom close to the upper bound.
 ```@example Main
 fm4 = fit(MixedModel, @formula(diameter ~ 1+(1|plate)+(1|sample)),
-          MixedModels.dataset(:penicillin))
+          MixedModelsDatasets.dataset(:penicillin))
 DisplayAs.Text(ans) # hide
 ```
 ```@example Main
@@ -529,7 +529,7 @@ sum(leverage(fm4))
 Also, a model fit by the REML criterion generally has larger estimates of the variance components and hence a larger effective degrees of freedom.
 ```@example Main
 fm4r = fit(MixedModel, @formula(diameter ~ 1+(1|plate)+(1|sample)),
-           MixedModels.dataset(:penicillin), REML=true)
+           MixedModelsDatasets.dataset(:penicillin), REML=true)
 DisplayAs.Text(ans) # hide
 ```
 ```@example Main
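
The `nAGQ` hunk above only sets up the model objects; a short hedged sketch of how that keyword is used follows (same formula as the docs, with the `nAGQ` keyword of `fit` assumed; not part of this commit).

```julia
# Hedged sketch of adaptive Gauss-Hermite quadrature via the nAGQ keyword.
using MixedModels
using MixedModelsDatasets: dataset

contraception = dataset(:contra)
contraform = @formula(use ~ 1 + age + abs2(age) + livch + urban + (1 | dist))
m_laplace = fit(MixedModel, contraform, contraception, Bernoulli())           # default Laplace approximation
m_agq9    = fit(MixedModel, contraform, contraception, Bernoulli(); nAGQ=9)   # 9-point adaptive rule
deviance(m_laplace), deviance(m_agq9)
```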

docs/src/derivatives.md

Lines changed: 3 additions & 3 deletions
@@ -15,9 +15,9 @@ ForwardDiff.hessian(::LinearMixedModel{T}, ::Vector{T}) where {T}
 ### Exact zero at optimum for trivial models

 ```@example Derivatives
-using MixedModels, ForwardDiff
+using MixedModels, MixedModelsDatasets, ForwardDiff
 using DisplayAs # hide
-fm1 = lmm(@formula(yield ~ 1 + (1|batch)), MixedModels.dataset(:dyestuff2))
+fm1 = lmm(@formula(yield ~ 1 + (1|batch)), MixedModelsDatasets.dataset(:dyestuff2))
 DisplayAs.Text(ans) # hide
 ```

@@ -32,7 +32,7 @@ ForwardDiff.hessian(fm1)
 ### Approximate zero at optimum for non trivial models

 ```@example Derivatives
-fm2 = lmm(@formula(reaction ~ 1 + days + (1+days|subj)), MixedModels.dataset(:sleepstudy))
+fm2 = lmm(@formula(reaction ~ 1 + days + (1+days|subj)), MixedModelsDatasets.dataset(:sleepstudy))
 DisplayAs.Text(ans) # hide
 ```


docs/src/index.md

Lines changed: 3 additions & 3 deletions
@@ -16,8 +16,8 @@ You can fit a model using a `lmer`-style model formula using `@formula` and a da
 Here is a short example of how to fit a linear mixed-effects model using the `dyestuff` dataset:

 ```@example Main
-using DataFrames, MixedModels # load packages
-dyestuff = MixedModels.dataset(:dyestuff); # load dataset
+using DataFrames, MixedModels, MixedModelsDatasets # load packages
+dyestuff = MixedModelsDatasets.dataset(:dyestuff); # load dataset

 lmod = lmm(@formula(yield ~ 1 + (1|batch)), dyestuff) # fit the model!
 DisplayAs.Text(ans) # hide
@@ -28,7 +28,7 @@ A quick example of a generalized linear model using the `verbagg` dataset:

 ```@example Main
 using DataFrames, MixedModels # load packages
-verbagg = MixedModels.dataset(:verbagg); # load dataset
+verbagg = MixedModelsDatasets.dataset(:verbagg); # load dataset

 frm = @formula(r2 ~ 1 + anger + gender + btype + situ + mode + (1|subj) + (1|item));
 bernmod = glmm(frm, verbagg, Bernoulli()) # fit the model!

docs/src/mime.md

Lines changed: 4 additions & 4 deletions
@@ -12,7 +12,7 @@ Packages like `IJulia` and `Documenter` can often detect the presence of these d


 ```@example Main
-using MixedModels
+using MixedModels, MixedModelsDatasets
 form = @formula(rt_trunc ~ 1 + spkr * prec * load +
                 (1 + load | item) +
                 (1 + spkr + prec + load | subj))
@@ -21,7 +21,7 @@ contr = Dict(:spkr => EffectsCoding(),
              :load => EffectsCoding(),
              :item => Grouping(),
              :subj => Grouping())
-kbm = fit(MixedModel, form, MixedModels.dataset(:kb07); contrasts=contr)
+kbm = fit(MixedModel, form, MixedModelsDatasets.dataset(:kb07); contrasts=contr)
 ```

 Note that the display here is more succinct than the standard REPL display:
@@ -52,8 +52,8 @@ kbm.optsum
 ```

 ```@example Main
-m0 = fit(MixedModel, @formula(reaction ~ 1 + (1|subj)), MixedModels.dataset(:sleepstudy))
-m1 = fit(MixedModel, @formula(reaction ~ 1 + days + (1+days|subj)), MixedModels.dataset(:sleepstudy))
+m0 = fit(MixedModel, @formula(reaction ~ 1 + (1|subj)), MixedModelsDatasets.dataset(:sleepstudy))
+m1 = fit(MixedModel, @formula(reaction ~ 1 + days + (1+days|subj)), MixedModelsDatasets.dataset(:sleepstudy))
 MixedModels.likelihoodratiotest(m0,m1)
 ```

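
As a brief hedged illustration of the display methods this page documents (the specific MIME types are assumed from the package documentation, not from this diff; `kbm` is the model fit in the hunks above):

```julia
# Render a fitted model through its MIME show methods instead of the plain REPL display.
show(stdout, MIME("text/markdown"), kbm)   # Markdown table output
show(stdout, MIME("text/latex"), kbm)      # LaTeX tabular output
# Capture the Markdown as a String rather than printing it:
md = sprint(show, MIME("text/markdown"), kbm)
```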

0 commit comments
