
Browsing by Author "Konietschke, F."

    Nonparametric multiple comparisons and simultaneous confidence intervals for multivariate designs
    (4th International Research Symposium on Pure and Applied Sciences, Faculty of Science, University of Kelaniya, Sri Lanka, 2019) Gunawardana, A.; Konietschke, F.
Over the last half-century, the use of multivariate designs has grown rapidly in many scientific disciplines. Such designs can have more than two possibly correlated response variables (endpoints) observed on each experimental unit and should allow comparisons across different treatment groups. Existing parametric tests in multivariate data analysis are based on the assumption that the observations follow multivariate normal distributions with equal covariance matrices across the groups. Such assumptions, however, are often impossible to justify for real data, e.g., skewed or ordered categorical observations. In fact, existing parametric methods that rely on the assumption of equal covariance matrices tend to be highly liberal or conservative when the covariance matrices of the groups actually differ. A nonparametric approach is therefore desirable that remains valid when covariance matrices differ, even under the null hypothesis of no treatment effect. In this study, purely nonparametric methods that close these gaps are introduced. The procedures are robust in the sense that they assume neither a specific data distribution nor identical covariance matrices across the treatment groups, and flexible in the sense that the inference method can be adjusted to specific research questions; in particular, the methods are consonant, coherent, and compatible. To test hypotheses formulated in terms of purely nonparametric treatment effects, pseudo-rank-based multiple tests are derived. The results are obtained by computing the distribution of normalized rank means under general but fixed alternatives. Instead of quadratic forms, t-test-type statistics are used as test statistics, and their joint distribution is derived asymptotically in closed form.
Small-sample approximations based on a method-of-moments multivariate t-approximation achieve accurate control of the multiple type-I error rate and power comparable to existing global testing procedures. To illustrate the application of the proposed tests, part of an immunotoxicity study on the effects of silicone on the immune system is considered. The study involved three treatment groups of mice, and five clinical chemistry endpoints were measured on each mouse after treatment. To answer the main question, namely quantifying (significant) differences between the treatment groups under each endpoint in order to draw biological conclusions on the effects of silicone, the multiple hypotheses are tested using many-to-one comparisons.
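The rank-based treatment effects in which the hypotheses above are formulated can be illustrated with a small sketch. The helper names below (`midranks`, `relative_effects`) are hypothetical, and the sketch shows only the unweighted relative effect for a single endpoint; the pseudo-ranks used in the abstract additionally weight all groups equally regardless of sample size:

```python
import numpy as np

def midranks(x):
    """Mid-ranks of a 1-D sample: tied values receive the average rank."""
    xs = np.sort(x)
    less = np.searchsorted(xs, x, side="left")   # values strictly below x[i]
    leq = np.searchsorted(xs, x, side="right")   # values <= x[i]
    return (less + leq + 1) / 2.0

def relative_effects(groups):
    """Estimate the nonparametric relative effect of each group,
    p_i = P(Z < X_i) + 0.5 * P(Z = X_i), with Z drawn from the pooled
    distribution, via pooled mid-ranks: p_i = (mean rank of group i - 1/2) / N.
    Illustrative sketch for one endpoint only."""
    pooled = np.concatenate(groups)
    N = len(pooled)
    ranks = midranks(pooled)
    effects, start = [], 0
    for g in groups:
        stop = start + len(g)
        effects.append((ranks[start:stop].mean() - 0.5) / N)
        start = stop
    return np.array(effects)
```

Identically distributed groups yield effects of 0.5; an effect above 0.5 indicates that observations from that group tend to be larger than a randomly chosen observation from the pooled sample.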
    Wild bootstrapping rank-based procedure: Multiple testing on multivariate data
    (Faculty of Science, University of Kelaniya, Sri Lanka, 2020) Gunawardana, A.; Konietschke, F.
Multivariate data occur in many scientific applications, for example in agriculture, biology, clinical studies in medicine, or the social sciences. They arise when two or more possibly correlated response variables are measured on the same experimental unit. In addition, the experimental units may be stratified into several treatment groups. Such a design is called a multivariate factorial design and should allow comparisons across different treatment groups. In statistical practice, the evaluation of a multivariate factorial design does not only ask whether there is a treatment effect between the groups in any of the responses but also, if such an effect is observed, between which groups and under which responses those differences exist. That is, testing only the global null hypothesis (all treatment groups have the same effect across all responses) is not sufficient; multiple comparisons between the treatment groups are also of practical importance. To date, the available nonparametric methods of multivariate analysis test hypotheses formulated in terms of the distribution functions of the data and thus assume identical covariance matrices across the groups. Moreover, they cannot provide adjusted p-values or compatible simultaneous confidence intervals (SCIs) for the multiple tests. In the present work, rank-based tests that close these gaps are derived to test hypotheses formulated in terms of purely nonparametric treatment effects. The new approaches can thus be used for testing the global null hypothesis as well as for performing multiple comparisons and computing compatible SCIs. Owing to the complexity of multivariate factorial designs and the typically small sample sizes encountered in statistical practice, small-sample approximations of the test statistics are of particular importance.
Therefore, a modern resampling method, namely a wild bootstrap approach, is introduced. Notably, the resampling version of the test statistic does not require estimating the correlation matrix of the test statistics. The critical values from the resampling distribution are used to construct rank-based multiple contrast tests and SCIs. The asymptotic validity of the wild bootstrap approach is derived, and its behavior was analyzed in an extensive simulation study considering different data distributions, covariance structures, and sample sizes. The simulation results show that the wild bootstrap method tends to be more robust, controls the multiple type-I error rate quite accurately, and achieves power comparable to rank-based MANOVA-type tests in all investigated scenarios. Furthermore, a real data example illustrates the application of the proposed tests.
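As a rough illustration of the wild bootstrap idea, here is a sketch for a simpler setting than the abstract's: mean-based t-type statistics for d endpoints in a one-sample design, rather than the paper's rank-based statistics. The function name and parameters are hypothetical. Centered observations are multiplied by random signs (Rademacher weights), and the critical value is the (1 − α) quantile of the bootstrap distribution of the maximum statistic, which absorbs the dependence between the endpoints without any explicit estimate of their correlation matrix:

```python
import numpy as np

def wild_bootstrap_max_t(X, B=2000, alpha=0.05, seed=0):
    """Wild-bootstrap critical value for the maximum of endpoint-wise
    t-type statistics (illustrative mean-based sketch).
    X: (n, d) array of n units with d endpoints; tests H0: all means are 0."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    t_obs = np.abs(X.mean(axis=0)) / (X.std(axis=0, ddof=1) / np.sqrt(n))
    resid = X - X.mean(axis=0)                   # center the data: impose H0
    t_star = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=n)      # Rademacher multipliers
        Xb = resid * w[:, None]
        se = Xb.std(axis=0, ddof=1) / np.sqrt(n)
        t_star[b] = np.max(np.abs(Xb.mean(axis=0)) / se)
    # the quantile of the bootstrap maximum replaces an explicit estimate
    # of the correlation matrix of the d test statistics
    return t_obs, np.quantile(t_star, 1 - alpha)
```

Any endpoint with an observed statistic above the common critical value is rejected, which yields multiplicity-adjusted decisions per endpoint rather than only a global verdict.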
