
Cleaning data prior to correspondence pattern analysis #33

Open
LinguList opened this issue Feb 16, 2023 · 1 comment

Comments

@LinguList
Contributor

We might need some basic checks of whether a correspondence pattern analysis is useful at all, since I detected one pattern that causes serious problems:

    {'ID': [365, 371, 370, 367, 364, 369, 368, 366, 362],
     'taxa': ['Hachijo', 'Hachijo', 'Kagoshima', 'Kochi', 'Kyoto',
              'Oki', 'Sado', 'Shuri', 'Tokyo'],
     'seqs': [['k', 'iː', '-', '-', '-', '-'],
              ['k', 'e', 'b', 'u', 'ɕ', 'o'],
              ['k', 'e', '-', '-', '-', 'i'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'eː', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'iː', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-']],
     'alignment': [['k', 'iː', '-', '-', '-', '-'],
                   ['k', 'e', 'b', 'u', 'ɕ', 'o'],
                   ['k', 'e', '-', '-', '-', 'i'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'eː', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'iː', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-']],
     'dataset': 'japonic',
     'seq_id': '449 ("hair")'}

Here, we have two words from Hachijo in the same cognate set, but they differ (!). One can argue that, for correspondence patterns, strictly cognate words cannot differ, so a preprocessing step can in fact arbitrarily decide for one of them.
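Such a preprocessing step could be sketched as follows (a minimal illustration, not CoPaR's actual implementation; the function name `deduplicate_taxa` is hypothetical). It keeps only the first word per doculect and filters all index-aligned lists in the pattern accordingly:

```python
def deduplicate_taxa(pattern):
    """Keep at most one word per doculect in a cognate-set pattern.

    Assumes 'ID', 'taxa', 'seqs', and 'alignment' are index-aligned lists,
    as in the example above; the first word per doculect wins (arbitrarily).
    """
    seen, keep = set(), []
    for i, taxon in enumerate(pattern["taxa"]):
        if taxon not in seen:
            seen.add(taxon)
            keep.append(i)
    # Filter every list-valued entry by the kept indices; copy scalars as-is.
    return {
        key: [value[i] for i in keep] if isinstance(value, list) else value
        for key, value in pattern.items()
    }
```

Applied to the pattern above, this would drop the second Hachijo entry ('kebuɕo') and leave one row per doculect.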

@LinguList
Contributor Author

The pattern is difficult to detect. In CoPaR, only one of the two words is used and the other is ignored, but this will shrink the alignment, since that one word is responsible for all the gaps.
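The shrinking effect can be made explicit: once the gap-causing word is removed, the columns it introduced consist only of gaps and can be trimmed. A minimal sketch (the helper name `trim_gap_columns` is hypothetical, not part of CoPaR):

```python
def trim_gap_columns(alignment, gap="-"):
    """Drop alignment columns that contain only gap symbols.

    After removing a row like ['k', 'e', 'b', 'u', 'ɕ', 'o'] from the
    example alignment, the columns it alone filled become all-gap and
    are no longer informative for correspondence patterns.
    """
    keep = [
        j for j in range(len(alignment[0]))
        if any(row[j] != gap for row in alignment)
    ]
    return [[row[j] for j in keep] for row in alignment]
```

In the example above, dropping the 'kebuɕo' row and trimming would shrink the six-column alignment to three columns (the 'k' column, the vowel column, and the final column kept only because of Kagoshima's 'i').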
