Last updated on 2024-11-27 09:49:53 CET.
Package | ERROR | NOTE | OK |
---|---|---|---|
easystats | | | 13 |
esc | | | 13 |
ggeffects | 4 | | 9 |
insight | | | 13 |
parameters | 4 | | 9 |
performance | | | 13 |
sjlabelled | | | 13 |
sjmisc | | 3 | 10 |
sjPlot | | | 13 |
sjstats | | | 13 |
Current CRAN status for easystats: OK: 13
Current CRAN status for esc: OK: 13
Current CRAN status for ggeffects: ERROR: 4, OK: 9
Version: 1.7.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [131s/128s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(ggeffects)
> test_check("ggeffects")
Model has log transformed response. Predictions are on transformed
scale.
Data points may overlap. Use the `jitter` argument to add some amount of
random variation to the location of data points and avoid overplotting.
(Intercept) tensionM tensionH
36.38889 -10.00000 -14.72222
Not all rows are shown in the output. Use `print(..., n = Inf)` to show
all rows.
Not all rows are shown in the output. Use `print(..., n = Inf)` to show
all rows.
NOTE: Results may be misleading due to involvement in interactions
NOTE: Results may be misleading due to involvement in interactions
Data points may overlap. Use the `jitter` argument to add some amount of
random variation to the location of data points and avoid overplotting.
Data points may overlap. Use the `jitter` argument to add some amount of
random variation to the location of data points and avoid overplotting.
Data points may overlap. Use the `jitter` argument to add some amount of
random variation to the location of data points and avoid overplotting.
Data points may overlap. Use the `jitter` argument to add some amount of
random variation to the location of data points and avoid overplotting.
Re-fitting to get Hessian
Re-fitting to get Hessian
Could not compute variance-covariance matrix of predictions. No
confidence intervals are returned.
Model contains splines or polynomial terms. Consider using `terms="mined
[all]"` to get smooth plots. See also package-vignette 'Adjusted
Predictions at Specific Values'.
Model contains splines or polynomial terms. Consider using `terms="cover
[all]"` to get smooth plots. See also package-vignette 'Adjusted
Predictions at Specific Values'.
Model contains splines or polynomial terms. Consider using `terms="mined
[all]"` to get smooth plots. See also package-vignette 'Adjusted
Predictions at Specific Values'.
Model contains splines or polynomial terms. Consider using `terms="cover
[all]"` to get smooth plots. See also package-vignette 'Adjusted
Predictions at Specific Values'.
Can't compute adjusted predictions, `effects::Effect()` returned an error.
Reason: Invalid operation on a survival time
You may try `ggpredict()` or `ggemmeans()`.
Can't compute adjusted predictions, `effects::Effect()` returned an error.
Reason: non-conformable arguments
You may try `ggpredict()` or `ggemmeans()`.
[ FAIL 6 | WARN 1 | SKIP 63 | PASS 653 ]
══ Skipped tests (63) ══════════════════════════════════════════════════════════
• On CRAN (55): 'test-MCMCglmm.R:1:1', 'test-MixMod.R:1:1',
'test-avg_predictions.R:24:3', 'test-avg_predictions.R:79:5',
'test-backtransform_response.R:76:5', 'test-bias_correction.R:1:1',
'test-brms-categ-cum.R:1:1', 'test-brms-monotonic.R:1:1',
'test-brms-ppd.R:1:1', 'test-brms-trial.R:1:1', 'test-clean_vars.R:1:1',
'test-clm.R:1:1', 'test-clm2.R:1:1', 'test-clmm.R:1:1',
'test-correct_se_sorting.R:1:1', 'test-decimals.R:1:1', 'test-fixest.R:1:1',
'test-focal_only_random.R:1:1', 'test-format.R:1:1', 'test-gamlss.R:1:1',
'test-gamm4.R:1:1', 'test-glmer.R:2:1', 'test-glmmTMB.R:1:1',
'test-interval_re.R:1:1', 'test-ivreg.R:1:1',
'test-johnson_neyman_numcat.R:1:1', 'test-list_terms.R:36:3',
'test-lmer.R:1:1', 'test-mgcv.R:1:1', 'test-plot-ordinal-latent.R:1:1',
'test-plot.R:69:1', 'test-polr.R:21:7', 'test-polr.R:60:7',
'test-pool_comparisons.R:1:1', 'test-print.R:1:1', 'test-print_digits.R:1:1',
'test-print_md.R:1:1', 'test-print_zero_inflation.R:1:1',
'test-resid_over_grid.R:33:5', 'test-rstanarm-ppd.R:1:1',
'test-rstanarm.R:1:1', 'test-sdmTMB.R:1:1', 'test-simulate.R:1:1',
'test-test_predictions-margin.R:1:1', 'test-test_predictions-mixed.R:1:1',
'test-test_predictions_emmeans.R:133:3',
'test-test_predictions_emmeans.R:168:3',
'test-test_predictions_ggeffects.R:140:3',
'test-test_predictions_ggeffects.R:172:3',
'test-test_predictions_ggeffects.R:181:3',
'test-test_predictions_ggeffects.R:224:5', 'test-vcov.R:1:1',
'test-vglm.R:1:1', 'test-zeroinfl.R:27:3', 'test-zi_prob.R:1:1'
• On Linux (5): 'test-ordinal.R:1:1', 'test-parsnip.R:94:3',
'test-print_subsets.R:1:1', 'test-print_test_predictions-ordinal.R:1:1',
'test-print_test_predictions.R:1:1'
• empty test (3): 'test-plot.R:8:1', 'test-polr.R:136:5', 'test-polr.R:142:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-test_predictions.R:157:3'): test_predictions, categorical, pairwise ──
out$Contrast (`actual`) not equal to c(...) (`expected`).
actual | expected
[1] -0.2051 - 0.4199 [1]
[2] 0.0666 - -0.2051 [2]
[3] 0.4199 - -0.1528 [3]
[4] -0.1528 - 0.0666 [4]
[5] 0.1187 | 0.1187 [5]
[6] 0.2718 - -0.6251 [6]
[7] 0.6251 - -0.5727 [7]
[8] 0.0524 - -0.3533 [8]
[9] 0.3239 - -0.3012 [9]
[10] 0.3533 - 0.0524 [10]
... ... ... and 5 more ...
── Failure ('test-test_predictions.R:166:3'): test_predictions, categorical, pairwise ──
out$groups (`actual`) not identical to c(...) (`expected`).
actual | expected
[1] "control-control" - "control-treatment" [1]
[2] "control-control" | "control-control" [2]
[3] "control-treatment" | "control-treatment" [3]
[4] "control-treatment" - "control-control" [4]
[5] "control-treatment" | "control-treatment" [5]
[6] "control-control" - "treatment-control" [6]
[7] "control-treatment" - "treatment-treatment" [7]
[8] "control-treatment" - "treatment-control" [8]
[9] "control-treatment" - "treatment-treatment" [9]
[10] "control-treatment" | "control-treatment" [10]
... ... ... and 5 more ...
── Failure ('test-test_predictions.R:176:3'): test_predictions, categorical, pairwise ──
out$episode (`actual`) not identical to c(...) (`expected`).
actual | expected
[1] "1-2" - "1-1" [1]
[2] "1-3" - "1-2" [2]
[3] "1-1" - "1-2" [3]
[4] "1-2" - "1-3" [4]
[5] "1-3" | "1-3" [5]
[6] "2-3" - "1-2" [6]
[7] "2-1" - "1-2" [7]
[8] "2-2" - "1-3" [8]
[9] "2-3" - "1-3" [9]
[10] "3-1" - "2-2" [10]
... ... ... and 5 more ...
── Failure ('test-test_predictions.R:311:3'): test_predictions, works with glmmTMB and w/o vcov ──
out1$Contrast (`actual`) not equal to c(0.06846, -0.87857, -0.79452, 0.30375, 1.48621) (`expected`).
`actual`: 0.304 0.827 -0.879 -0.228 -0.101
`expected`: 0.068 -0.879 -0.795 0.304 1.486
── Failure ('test-test_predictions.R:312:3'): test_predictions, works with glmmTMB and w/o vcov ──
out1$conf.low (`actual`) not equal to c(0.06846, -0.87857, -0.79452, 0.30375, 1.48621) (`expected`).
`actual`: 0.304 0.827 -0.879 -0.228 NA
`expected`: 0.068 -0.879 -0.795 0.304 1.486
── Failure ('test-test_predictions_emmeans.R:119:3'): test_predictions, engine emmeans, glm binomial ──
out1$Contrast (`actual`) not equal to out2$Contrast[4] (`expected`).
`actual`: -0.152
`expected`: -0.068
[ FAIL 6 | WARN 1 | SKIP 63 | PASS 653 ]
Deleting unused snapshots:
• backtransform_response/show-data-back-transformed-true.svg
• brms-monotonic/plot-brms-monotonic.svg
• plot-ordinal-latent/clm-latent-false.svg
• plot-ordinal-latent/clm-latent-true.svg
• plot-ordinal-latent/polr-latent-false.svg
• plot-ordinal-latent/polr-latent-true.svg
• plot/collapse-random-effects-works-again.svg
• plot/colored-data-points-with-special-focal-terms.svg
• plot/simple-plot-bw.svg
• plot/simple-plot-categorical-bw.svg
• plot/simple-plot-categorical-ci-bands-as-dots.svg
• plot/simple-plot-categorical-grey-scale.svg
• plot/simple-plot-categorical-no-ci.svg
• plot/simple-plot-categorical-show-data-jitter.svg
• plot/simple-plot-categorical-show-data.svg
• plot/simple-plot-categorical.svg
• plot/simple-plot-ci-bands-as-dots.svg
• plot/simple-plot-grey-scale.svg
• plot/simple-plot-no-ci.svg
• plot/simple-plot-show-data-jitter.svg
• plot/simple-plot-show-data.svg
• plot/simple-plot.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
Version: 1.7.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [6m/17m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output: identical to the r-devel-linux-x86_64-debian-gcc log above, except that 14 warnings are raised instead of 1 ([ FAIL 6 | WARN 14 | SKIP 63 | PASS 653 ]). The same 6 test failures, 63 skipped tests, and unused-snapshot deletions are reported, and the run again ends with 'Error: Test failures' and 'Execution halted'.
Flavor: r-devel-linux-x86_64-fedora-clang
Version: 1.7.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [6m/45m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output: identical to the r-devel-linux-x86_64-debian-gcc log above, except that 14 warnings are raised instead of 1 ([ FAIL 6 | WARN 14 | SKIP 63 | PASS 653 ]). The same 6 test failures, 63 skipped tests, and unused-snapshot deletions are reported, and the run again ends with 'Error: Test failures' and 'Execution halted'.
Flavor: r-devel-linux-x86_64-fedora-gcc
Version: 1.7.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [159s/215s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output: identical to the r-devel-linux-x86_64-debian-gcc log above ([ FAIL 6 | WARN 1 | SKIP 63 | PASS 653 ]), except that the "Could not compute variance-covariance matrix of predictions" message does not appear. The same 6 test failures, 63 skipped tests, and unused-snapshot deletions are reported, and the run again ends with 'Error: Test failures' and 'Execution halted'.
Flavor: r-patched-linux-x86_64
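Several of the hints printed in the ggeffects logs above (the `jitter` argument, `print(..., n = Inf)`, and `terms = "... [all]"` for smooth plots of spline or polynomial terms) can be tried directly. A minimal sketch, assuming a toy mtcars model that is not part of the package's test suite:

```r
library(ggeffects)

# Illustrative model with a polynomial term (not from the test suite)
m <- lm(mpg ~ poly(hp, 2) + factor(cyl), data = mtcars)

# "[all]" uses every observed value of hp, giving a smooth curve,
# as the "Model contains splines or polynomial terms" hint suggests
pr <- ggpredict(m, terms = "hp [all]")

# Show all rows, as the "Not all rows are shown" message suggests
print(pr, n = Inf)

# jitter adds random variation to raw data points to avoid overplotting
plot(pr, show_data = TRUE, jitter = 0.05)
```

This only demonstrates the hints; the actual CRAN failures above are mismatches in `test_predictions()` contrast ordering and values, not errors in these plotting helpers.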
Current CRAN status for insight: OK: 13
Current CRAN status for parameters: ERROR: 4, OK: 9
Version: 0.23.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [119s/86s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 1 | WARN 0 | SKIP 112 | PASS 673 ]
══ Skipped tests (112) ═════════════════════════════════════════════════════════
• On CRAN (100): 'test-GLMMadaptive.R:1:1', 'test-backticks.R:1:1',
'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1',
'test-brms.R:1:1', 'test-compare_parameters.R:91:7',
'test-compare_parameters.R:95:5', 'test-complete_separation.R:15:7',
'test-complete_separation.R:27:7', 'test-complete_separation.R:40:7',
'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1',
'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3',
'test-equivalence_test.R:82:3', 'test-format_model_parameters2.R:2:3',
'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1',
'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3',
'test-glmmTMB.R:8:1', 'test-helper.R:1:1', 'test-include_reference.R:15:3',
'test-include_reference.R:67:3', 'test-ivreg.R:54:3', 'test-lmerTest.R:1:1',
'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1',
'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1',
'test-marginaleffects.R:113:3', 'test-model_parameters.aov_es_ci.R:158:3',
'test-model_parameters.aov_es_ci.R:269:3',
'test-model_parameters.aov_es_ci.R:319:3',
'test-model_parameters.aov_es_ci.R:372:3',
'test-model_parameters.bracl.R:5:1', 'test-model_parameters.coxme.R:1:1',
'test-model_parameters.cgam.R:1:1', 'test-model_parameters.epi2x2.R:1:1',
'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3',
'test-model_parameters.fixest_multi.R:3:1',
'test-model_parameters.ggeffects.R:12:3',
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3',
'test-model_parameters.glm.R:68:3', 'test-model_parameters.logistf.R:1:1',
'test-model_parameters.mclogit.R:5:1',
'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1',
'test-model_parameters.nnet.R:5:1', 'test-model_parameters_df.R:1:1',
'test-model_parameters.vgam.R:3:1', 'test-model_parameters_ordinal.R:1:1',
'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1',
'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3',
'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1',
'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1',
'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3',
'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1',
'test-posterior.R:2:1', 'test-plm.R:111:3', 'test-printing-stan.R:2:1',
'test-print_AER_labels.R:8:3', 'test-printing.R:1:1', 'test-quantreg.R:1:1',
'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:3:1',
'test-serp.R:17:5', 'test-printing2.R:15:7', 'test-printing2.R:22:7',
'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7',
'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-svylme.R:1:1',
'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3',
'test-weightit.R:43:3', 'test-standardize_parameters.R:31:3',
'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3',
'test-standardize_parameters.R:175:3', 'test-standardize_parameters.R:300:3',
'test-standardize_parameters.R:334:3', 'test-standardize_parameters.R:428:3',
'test-standardize_parameters.R:518:3'
• On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1',
'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1',
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1):
'test-model_parameters_robust.R:129:3'
• empty test (5): 'test-wrs2.R:8:1', 'test-wrs2.R:18:1', 'test-wrs2.R:30:1',
'test-wrs2.R:43:1', 'test-wrs2.R:55:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-marginaleffects.R:12:3'): marginaleffects() ──────────────────
all(cols %in% colnames(out)) is not TRUE
`actual`: FALSE
`expected`: TRUE
[ FAIL 1 | WARN 0 | SKIP 112 | PASS 673 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
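The single failure on this flavor checks that a set of expected column names is present in the `marginaleffects()` output. A minimal sketch of that assertion pattern (the data frame and column names below are made up for illustration, not taken from the actual test):

```r
# Hedged sketch of the `all(cols %in% colnames(out))` pattern from the log;
# `out` and `cols` here are hypothetical stand-ins.
out  <- data.frame(term = "x1", estimate = 1.2, p.value = 0.04)
cols <- c("term", "estimate", "conf.low")  # "conf.low" is not in `out`

all(cols %in% colnames(out))   # FALSE -- exactly what the failed test reports
setdiff(cols, colnames(out))   # names the missing column(s): "conf.low"
```

When debugging this kind of failure, `setdiff(cols, colnames(out))` is more informative than the bare `TRUE`/`FALSE` the log shows, since it pinpoints which expected column disappeared.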
Version: 0.23.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [343s/220s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 2 | WARN 4 | SKIP 112 | PASS 672 ]
══ Skipped tests (112) ═════════════════════════════════════════════════════════
• Same 112 skipped tests as listed for the r-devel-linux-x86_64-debian-gcc flavor above (identical set, minor ordering differences: 100 on CRAN, 5 on Linux, 2 TODO, 5 empty tests).
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-compare_parameters.R:56:7'): compare_parameters, se_p2 ───────
`out` has length 44, not length 14.
── Failure ('test-marginaleffects.R:12:3'): marginaleffects() ──────────────────
all(cols %in% colnames(out)) is not TRUE
`actual`: FALSE
`expected`: TRUE
[ FAIL 2 | WARN 4 | SKIP 112 | PASS 672 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
Version: 0.23.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [319s/231s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 2 | WARN 4 | SKIP 112 | PASS 672 ]
══ Skipped tests (112) ═════════════════════════════════════════════════════════
• Same 112 skipped tests as listed for the r-devel-linux-x86_64-debian-gcc flavor above (identical set, minor ordering differences: 100 on CRAN, 5 on Linux, 2 TODO, 5 empty tests).
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-compare_parameters.R:56:7'): compare_parameters, se_p2 ───────
`out` has length 44, not length 14.
── Failure ('test-marginaleffects.R:12:3'): marginaleffects() ──────────────────
all(cols %in% colnames(out)) is not TRUE
`actual`: FALSE
`expected`: TRUE
[ FAIL 2 | WARN 4 | SKIP 112 | PASS 672 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
Version: 0.23.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [151s/85s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 1 | WARN 0 | SKIP 112 | PASS 673 ]
══ Skipped tests (112) ═════════════════════════════════════════════════════════
• Same 112 skipped tests as listed for the r-devel-linux-x86_64-debian-gcc flavor above (identical set, minor ordering differences: 100 on CRAN, 5 on Linux, 2 TODO, 5 empty tests).
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-marginaleffects.R:12:3'): marginaleffects() ──────────────────
all(cols %in% colnames(out)) is not TRUE
`actual`: FALSE
`expected`: TRUE
[ FAIL 1 | WARN 0 | SKIP 112 | PASS 673 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-patched-linux-x86_64
performance
Current CRAN status: OK: 13
sjlabelled
Current CRAN status: OK: 13
sjmisc
Current CRAN status: NOTE: 3, OK: 10
Version: 2.8.10
Check: Rd cross-references
Result: NOTE
Found the following Rd file(s) with Rd \link{} targets missing package
anchors:
to_value.Rd: set_labels
Please provide package anchors for all Rd \link{} targets not in the
package itself and the base packages.
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc, r-devel-windows-x86_64
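This NOTE appears when an Rd `\link{}` points to a help topic in another package without naming that package. Assuming `set_labels` lives in sjlabelled (as its name suggests), the fix in `to_value.Rd` is to add the package anchor:

```
% Before: unanchored link; the check cannot resolve the target package
\seealso{\code{\link{set_labels}}}

% After: explicit package anchor, as the NOTE requests
\seealso{\code{\link[sjlabelled]{set_labels}}}
```

If the documentation is generated with roxygen2, the same change goes into the `@seealso` tag of the source comment rather than the `.Rd` file directly.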
sjPlot
Current CRAN status: OK: 13
sjstats
Current CRAN status: OK: 13