* using log directory 'd:/Rcompile/CRANpkg/local/4.5/colleyRstats.Rcheck'
* using R version 4.5.3 (2026-03-11 ucrt)
* using platform: x86_64-w64-mingw32
* R was compiled by
    gcc.exe (GCC) 14.3.0
    GNU Fortran (GCC) 14.3.0
* running under: Windows Server 2022 x64 (build 20348)
* using session charset: UTF-8
* checking for file 'colleyRstats/DESCRIPTION' ... OK
* checking extension type ... Package
* this is package 'colleyRstats' version '0.0.2'
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking whether package 'colleyRstats' can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... [6s] OK
* checking whether the package can be loaded with stated dependencies ... [5s] OK
* checking whether the package can be unloaded cleanly ... [6s] OK
* checking whether the namespace can be loaded with stated dependencies ... [6s] OK
* checking whether the namespace can be unloaded cleanly ... [7s] OK
* checking loading without being on the library search path ... [6s] OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... WARNING
Missing or unexported object: 'ggstatsplot::pairwise_comparisons'
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... [18s] OK
* checking Rd files ...
[1s] OK
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking examples ... [24s] OK
* checking for unstated dependencies in 'tests' ... OK
* checking tests ... [20s] ERROR
  Running 'testthat.R' [20s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
  > # * https://testthat.r-lib.org/articles/special-files.html
  >
  > library(testthat)
  > library(colleyRstats)
  >
  > test_check("colleyRstats")
  Scale for x is already present.
  Adding another scale for x, which will replace the existing scale.
  Saving _problems/test-plotting-115.R
  Saving _problems/test-plotting-132.R
  The NPAV found a significant main effect of \Video on mental workload (\F{1}{10}{6.12}, \p{0.033}), $\eta_{p}^{2}$=0.38 [0.02, 1.00].
  The NPAV found a significant interaction effect of \gesture $\times$ \eHMI on mental workload (\F{1}{10}{5.01}, \p{0.045}), $\eta_{p}^{2}$=0.33 [0.00, 1.00].
  The ART found a significant interaction effect of \gesture $\times$ \eHMI on mental demand (\F{1}{10}{5.01}, \p{0.045}, $\eta_{p}^{2}$ = 0.33, 95\% CI: [0.00, 1.00]).
  %B: \m{-0.44}, \sd{0.90}

  	Kruskal-Wallis rank sum test

  data:  x and g
  Kruskal-Wallis chi-squared = 96.9374, df = 2, p-value = 0

        Dunn's Pairwise Comparison of x by g (Holm)

  Col Mean-│
  Row Mean │     setosa   versicol
  ─────────┼──────────────────────
  versicol │  -6.106326
           │    0.0000*
           │
  virginic │  -9.741784  -3.635458
           │    0.0000*    0.0003*

  FWER = 0.05
  Reject Ho if p ≤ FWER with stopping rule, where p = Pr(|Z| ≥ |z|)

  A post-hoc test found that Sepal.Length for the \Species virginica was significantly higher (\m{6.59}, \sd{0.64}) than for setosa (\m{5.01}, \sd{0.35}; \padjminor{0.001}, \rankbiserial{0.97}) and versicolor (\m{5.94}, \sd{0.52}; \padjminor{0.001}, \rankbiserial{0.58}).
  % latex table generated in R 4.5.3 by xtable 1.8-8 package
  % Fri Apr 24 05:41:53 2026
  \begin{table}[ht]
  \centering
  \caption{Post-hoc comparisons for independent variable \Species and dependent variable \Sepal.Length. Positive Z-values mean that the first-named level is sig. higher than the second-named. For negative Z-values, the opposite is true. Effect size reported as rank-biserial correlation (r).}
  \label{tab:posthoc-Species-Sepal.Length}
  \begingroup\small
  \begin{tabular}{lrll}
    \hline
  Comparison & Z & p-adjusted & r \\
    \hline
  setosa - versicolor & -6.1063 & $<$0.001 & 0.87 \\
  setosa - virginica & -9.7418 & $<$0.001 & 0.97 \\
  versicolor - virginica & -3.6355 & $<$0.001 & 0.58 \\
    \hline
  \end{tabular}
  \endgroup
  \end{table}

  	Kruskal-Wallis rank sum test

  data:  x and g
  Kruskal-Wallis chi-squared = 96.9374, df = 2, p-value = 0

        Dunn's Pairwise Comparison of x by g (Holm)

  Col Mean-│
  Row Mean │     setosa   versicol
  ─────────┼──────────────────────
  versicol │  -6.106326
           │    0.0000*
           │
  virginic │  -9.741784  -3.635458
           │    0.0000*    0.0003*

  FWER = 0.05
  Reject Ho if p ≤ FWER with stopping rule, where p = Pr(|Z| ≥ |z|)

  % latex table generated in R 4.5.3 by xtable 1.8-8 package
  % Fri Apr 24 05:41:54 2026
  \begin{table}[ht]
  \centering
  \caption{Post-hoc comparisons for independent variable \Species and dependent variable \Sepal.Length. Positive Z-values mean that the first-named level is sig. higher than the second-named. For negative Z-values, the opposite is true. Effect size reported as rank-biserial correlation (r).}
  \label{tab:posthoc-Species-Sepal.Length}
  \begingroup\small
  \begin{tabular}{lrll}
    \hline
  Comparison & Z & p-adjusted & r \\
    \hline
  setosa - versicolor & -6.1063 & $<$0.001 & 0.87 \\
  setosa - virginica & -9.7418 & $<$0.001 & 0.97 \\
  versicolor - virginica & -3.6355 & $<$0.001 & 0.58 \\
    \hline
  \end{tabular}
  \endgroup
  \end{table}

  [ FAIL 2 | WARN 3 | SKIP 0 | PASS 62 ]

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test-plotting.R:109:3'): ggbetweenstatsWithPriorNormalityCheckAsterisk returns a ggplot object ──
  Error: 'pairwise_comparisons' is not an exported object from 'namespace:ggstatsplot'
  Backtrace:
      ▆
   1. └─colleyRstats::ggbetweenstatsWithPriorNormalityCheckAsterisk(...) at test-plotting.R:109:3
   2.   ├─dplyr::filter(...)
   3.   ├─dplyr::mutate(...)
   4.   ├─dplyr::arrange(...)
   5.   └─dplyr::mutate(...)
  ── Error ('test-plotting.R:126:3'): ggwithinstatsWithPriorNormalityCheckAsterisk returns a ggplot object ──
  Error: 'pairwise_comparisons' is not an exported object from 'namespace:ggstatsplot'
  Backtrace:
      ▆
   1. └─colleyRstats::ggwithinstatsWithPriorNormalityCheckAsterisk(...) at test-plotting.R:126:3
   2.   ├─dplyr::filter(...)
   3.   ├─dplyr::mutate(...)
   4.   ├─dplyr::arrange(...)
   5.   └─dplyr::mutate(...)

  [ FAIL 2 | WARN 3 | SKIP 0 | PASS 62 ]
  Error: ! Test failures.
  Execution halted
* checking PDF version of manual ... [23s] OK
* checking HTML version of manual ... [7s] OK
* DONE

Status: 1 ERROR, 1 WARNING
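Both the WARNING and the two test failures trace back to the same cause: the package calls `ggstatsplot::pairwise_comparisons()`, but current ggstatsplot versions no longer re-export that function (it is exported by the statsExpressions package, on which ggstatsplot builds; verify this against the installed versions before relying on it). A minimal sketch of a defensive fix is a small resolver that looks the function up in whichever namespace exports it; `resolve_fun` is a hypothetical helper name, and only base-R machinery is used:

```r
## Hedged sketch: resolve a function from the first package that exports it.
## Assumes the real function now lives in statsExpressions; the resolver
## itself uses only base R and makes no assumption about which package wins.
resolve_fun <- function(fun, pkgs) {
  for (pkg in pkgs) {
    # requireNamespace() loads the namespace without attaching it
    if (requireNamespace(pkg, quietly = TRUE) &&
        fun %in% getNamespaceExports(pkg)) {
      # getExportedValue() fetches the exported object from that namespace
      return(getExportedValue(pkg, fun))
    }
  }
  stop(sprintf("'%s' not found in any of: %s",
               fun, paste(pkgs, collapse = ", ")))
}

# Sanity check with a base-R function that is always available:
f <- resolve_fun("paste", "base")
stopifnot(identical(f, base::paste))
```

Inside colleyRstats, the failing call could then become `resolve_fun("pairwise_comparisons", c("statsExpressions", "ggstatsplot"))(...)`. The simpler permanent fix, if statsExpressions is confirmed as the new home, is to call `statsExpressions::pairwise_comparisons()` directly and add statsExpressions to `Imports:` in DESCRIPTION, which also clears the "Missing or unexported object" WARNING.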