Nonparametric statistics

Nonparametric statistics is a type of statistical analysis that makes minimal assumptions about the underlying distribution of the data being studied. Often these models are infinite-dimensional, rather than finite-dimensional as in parametric statistics.[1] Nonparametric statistics can be used for descriptive statistics or statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are evidently violated.[2]
Definitions
The term "nonparametric statistics" has been defined imprecisely in the following two ways, among others:
The first meaning of nonparametric involves techniques that do not rely on data belonging to any particular parametric family of probability distributions. These include, among others:
- Distribution-free methods, which do not rely on the assumption that the data are drawn from a given parametric family of probability distributions.
- Statistics defined as functions of a sample, with no dependence on a parameter. An example is order statistics, which are based on the ordinal ranking of observations.
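To make this concrete, here is a minimal sketch (illustrative only: plain Python with made-up data, not an example from the article) of statistics that are functions of the sample alone:

```python
# Illustrative sketch: statistics computed from the sample alone,
# with no parametric model assumed for the data.

def order_statistics(sample):
    """Return the sorted sample: x_(1) <= x_(2) <= ... <= x_(n)."""
    return sorted(sample)

def sample_median(sample):
    """The median depends only on the order statistics of the sample."""
    xs = order_statistics(sample)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

data = [3.1, 0.4, 5.9, 2.6, 5.3]
print(order_statistics(data))  # [0.4, 2.6, 3.1, 5.3, 5.9]
print(sample_median(data))     # 3.1
```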
The following discussion is taken from Kendall's Advanced Theory of Statistics.[3]
Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical.
It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is called parametric.
Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesis non-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termed distribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.
The second meaning of non-parametric involves techniques that do not assume that the structure of a model is fixed. Typically, the model grows in size to accommodate the complexity of the data. In these techniques, individual variables are typically assumed to belong to parametric distributions, and assumptions about the types of associations among variables are also made. These techniques include, among others:
- non-parametric regression, which is modeling in which the structure of the relationship between variables is treated non-parametrically, although there may still be parametric assumptions about the distribution of model residuals (a code sketch follows this list).
- non-parametric hierarchical Bayesian models, such as models based on the Dirichlet process, which allow the number of latent variables to grow as necessary to fit the data, but where individual variables still follow parametric distributions and even the process controlling the rate of growth of latent variables follows a parametric distribution.
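To make the regression entry concrete, here is a minimal sketch of Nadaraya–Watson kernel regression, one common non-parametric regression estimator. The Gaussian kernel, the bandwidth h, and the data are illustrative assumptions, not details from this article:

```python
# Nadaraya-Watson kernel regression: the estimate at a query point is a
# locally weighted average of the observed y values, so the shape of the
# fitted curve is determined by the data, not by a fixed functional form.
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u)

def kernel_regression(x_train, y_train, x_query, h=0.5):
    """Estimate E[y | x = x_query] as a kernel-weighted average."""
    weights = [gaussian_kernel((x_query - xi) / h) for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]
print(kernel_regression(xs, ys, 1.5, h=0.8))  # ~1.5, a local average of nearby y values
```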
Applications and purpose
Non-parametric methods are widely used for studying populations that have a ranked order (such as movie reviews receiving one to five "stars"). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data.
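For example, rank-based methods begin by replacing raw scores with their ranks, which is exactly the ordinal information described above. A small illustration (the ratings are made up; scipy.stats is one common implementation):

```python
# Converting raw scores to ranks: the only information rank-based
# nonparametric methods use. Tied values share the average of their ranks.
from scipy.stats import rankdata

star_ratings = [4, 2, 5, 5, 3, 1]
print(rankdata(star_ratings))  # [4.  2.  5.5 5.5 3.  1. ]
```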
As non-parametric methods make fewer assumptions, their applicability is much wider than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, because they rely on fewer assumptions, non-parametric methods are more robust.
Non-parametric methods are sometimes considered simpler to use and more robust than parametric methods, even when the assumptions of parametric methods are justified. This is due to their more general nature, which may make them less susceptible to misuse and misunderstanding. Non-parametric methods can be considered a conservative choice, as they will work even when their assumptions are not met, whereas parametric methods can produce misleading results when their assumptions are violated.
The wider applicability and increased robustness of non-parametric tests come at a cost: in cases where a parametric test's assumptions are met, non-parametric tests have less statistical power. In other words, a larger sample size may be required to draw conclusions with the same degree of confidence.
Non-parametric models
Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term non-parametric is not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance.
- A histogram is a simple nonparametric estimate of a probability distribution.
- Kernel density estimation is another method to estimate a probability distribution (a sketch follows this list).
- Nonparametric regression and semiparametric regression methods have been developed based on kernels, splines, and wavelets.
- Data envelopment analysis provides efficiency coefficients similar to those obtained by multivariate analysis without any distributional assumption.
- k-nearest neighbors (KNN) classifies an unseen instance based on the k points in the training set that are nearest to it.
- A support vector machine (with a Gaussian kernel) is a nonparametric large-margin classifier.
- The method of moments with polynomial probability distributions.
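As a hedged illustration of the kernel density estimation entry above: the estimate below averages Gaussian bumps centered at the observations, so no parametric family is assumed for the underlying distribution. The kernel, bandwidth, and data are choices made for the example:

```python
# Kernel density estimation with a Gaussian kernel of bandwidth h.
import math

def kde(sample, x, h=1.0):
    """Estimate the density at x by averaging kernels centered at the data."""
    n = len(sample)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

sample = [1.2, 1.9, 2.1, 2.4, 3.3, 5.8]
for x in (1.0, 2.0, 3.0, 6.0):
    print(x, round(kde(sample, x, h=0.7), 3))  # higher density near the data cluster
```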
Methods
Non-parametric (or distribution-free) inferential statistical methods are mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. The most frequently used tests include:
- Analysis of similarities
- Anderson–Darling test: tests whether a sample is drawn from a given distribution
- Statistical bootstrap methods: estimate the accuracy/sampling distribution of a statistic (a sketch follows this list)
- Cochran's Q: tests whether k treatments in randomized block designs with 0/1 outcomes have identical effects
- Cohen's kappa: measures inter-rater agreement for categorical items
- Friedman two-way analysis of variance by ranks (repeated measures): tests whether k treatments in randomized block designs have identical effects
- Empirical likelihood
- Kaplan–Meier: estimates the survival function from lifetime data, modeling censoring
- Kendall's tau: measures statistical dependence between two variables
- Kendall's W: a measure between 0 and 1 of inter-rater agreement.
- Kolmogorov–Smirnov test: tests whether a sample is drawn from a given distribution, or whether two samples are drawn from the same distribution.
- Kruskal–Wallis one-way analysis of variance by ranks: tests whether > 2 independent samples are drawn from the same distribution.
- Kuiper's test: tests whether a sample is drawn from a given distribution, sensitive to cyclic variations such as day of the week.
- Logrank test: compares survival distributions of two right-skewed, censored samples.
- Mann–Whitney U or Wilcoxon rank sum test: tests whether two samples are drawn from the same distribution, as compared to a given alternative hypothesis.
- McNemar's test: tests whether, in 2 × 2 contingency tables with a dichotomous trait and matched pairs of subjects, row and column marginal frequencies are equal.
- Median test: tests whether two samples are drawn from distributions with equal medians.
- Pitman's permutation test: a statistical significance test that yields exact p values by examining all possible rearrangements of labels.
- Rank products: detects differentially expressed genes in replicated microarray experiments.
- Siegel–Tukey test: tests for differences in scale between two groups.
- Sign test: tests whether matched pair samples are drawn from distributions with equal medians.
- Spearman's rank correlation coefficient: measures statistical dependence between two variables using a monotonic function.
- Squared ranks test: tests equality of variances in two or more samples.
- Tukey–Duckworth test: tests equality of two distributions by using ranks.
- Wald–Wolfowitz runs test: tests whether the elements of a sequence are mutually independent/random.
- Wilcoxon signed-rank test: tests whether matched pair samples are drawn from populations with different mean ranks.
- Universal linear fit identification: detects linear fits in a way that is independent of the data's noise distribution model, unaffected by outliers, and free of imputation for missing or removed data.[4]
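Several of the tests above are available in standard libraries. The sketch below is illustrative only: scipy.stats is one common implementation, and the data are invented, so the p-values mean nothing in themselves:

```python
# Running a few of the tests listed above with scipy.stats.
# Each call returns a (statistic, p-value) result object.
from scipy import stats

a = [12, 15, 9, 20, 14, 11, 17]
b = [16, 22, 13, 25, 19, 21, 18]

print(stats.mannwhitneyu(a, b))  # Mann-Whitney U: same distribution?
print(stats.wilcoxon(a, b))      # Wilcoxon signed-rank: matched pairs a[i], b[i]
print(stats.ks_2samp(a, b))      # two-sample Kolmogorov-Smirnov
print(stats.spearmanr(a, b))     # Spearman's rank correlation
```

Because the bootstrap entry above describes a resampling algorithm rather than a single test, here is a minimal pure-Python sketch of it as well; the resample count and the fixed seed are arbitrary choices for the example:

```python
# Bootstrap estimate of the standard error of a statistic: resample the
# data with replacement and measure the spread of the statistic.
import random

def bootstrap_se(sample, stat, n_resamples=2000, seed=0):
    rng = random.Random(seed)
    reps = [stat([rng.choice(sample) for _ in sample])
            for _ in range(n_resamples)]
    mean = sum(reps) / len(reps)
    return (sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5

data = [12, 15, 9, 20, 14, 11, 17]
print(bootstrap_se(data, lambda s: sum(s) / len(s)))  # SE of the sample mean
```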
History
Early nonparametric statistics include the median (13th century or earlier; used in estimation by Edward Wright, 1599; see Median § History) and the sign test by John Arbuthnot (1710) in analyzing the human sex ratio at birth (see Sign test § History).[5][6]
See also
- CDF-based nonparametric confidence interval
- Parametric statistics
- Resampling (statistics)
- Semiparametric model
Notes
[edit]- ^ "All of Nonparametric Statistics". Springer Texts in Statistics. 2006. doi:10.1007/0-387-30623-4. ISBN 978-0-387-25145-5.
- ^ Pearce, J; Derrick, B (2019). "Preliminary testing: The devil of statistics?". Reinvention: An International Journal of Undergraduate Research. 12 (2). doi:10.31273/reinvention.v12i2.339.
- ^ Stuart A., Ord J.K, Arnold S. (1999), Kendall's Advanced Theory of Statistics: Volume 2A—Classical Inference and the Linear Model, sixth edition, §20.2–20.3 (Arnold).
- ^ Adikaram, K. K. L. B.; Hussein, M. A.; Effenberger, M.; Becker, T. (16 November 2015). "Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation". PLOS ONE. 10 (11): e0141486. Bibcode:2015PLoSO..1041486A. doi:10.1371/journal.pone.0141486. ISSN 1932-6203. PMC 4646355. PMID 26571035.
- ^ Conover, W.J. (1999), "Chapter 3.4: The Sign Test", Practical Nonparametric Statistics (Third ed.), Wiley, pp. 157–176, ISBN 0-471-16068-7
- ^ Sprent, P. (1989), Applied Nonparametric Statistical Methods (Second ed.), Chapman & Hall, ISBN 0-412-44980-3