Compare commits

...

No commits in common. "master" and "gh-pages" have entirely different histories.

93 changed files with 133 additions and 22381 deletions


@@ -1,20 +0,0 @@
[ignore]
.*/node_modules/.*
.*/src/serviceWorker\.js
.*/src/index\.js
.*\.test
[include]
[libs]
[lints]
[options]
; all=true
[strict]
[untyped]
.*\.scss
.*\.css

23
.gitignore vendored

@@ -1,23 +0,0 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
/node_modules
/.pnp
.pnp.js
# testing
/coverage
# production
/build
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

21
LICENSE

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2021 Sorrel
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

111
README.md

@@ -1,111 +0,0 @@
# Feature Change Applier
[Try the app!](https://sorrelbri.github.io/feature-change-applier/)
[Inspired by the Zompist Sound Change Applier 2](https://www.zompist.com/sca2.html)
## What is this?
Feature Change Applier is a tool for applying systemic sound change rules to an input lexicon.
Features:
- feature based phone definitions
- feature based sound change rule support
- multi-character phone support
- comparative runs for multiple rule sets
## What is LATL?
[Read the specification](/src/utils/latl/README.md)
LATL is a compiled language targeting JavaScript for performing linguistic analysis and transformations.
## How do I use FCA?
An FCA run requires the user to define three parameters:
- [the input lexicon](#The-Input-Lexicon), expressed in phonetic terms
- [the feature set](#the-feature-set), which maps each phonetic feature to positive or negative values for each phone
- [at least one 'epoch' of sound change rules](#epochs) to apply to the input lexicon
### The Input Lexicon
For best effect, the input lexicon should use a narrow phonetic transcription of each lexeme.
Lexemes must be separated by line breaks in order to be parsed properly by FCA.
Multi-word lexemes can be entered with or without spaces; any whitespace will be removed from the lexeme at runtime.
FCA does not currently support suprasegmentals by default; however, features can be used to define prosodic information as long as it can be associated with a single phone.
For example:
- For tonemes, use IPA tone markers as in `ma˨˩˦` (马)
- For phonetic length, use IPA length markers as in `ħaːsin` (حَاسِن‎)
- Stress and syllable-break markers, however, as in `ˈɡʊd.nɪs`, may result in unpredictable behavior and are best avoided.
See below for defining these features in the feature set.
#### Future Changes to the Input Lexicon
- Future versions of FCA will allow for greater suprasegmental feature support.
- Future versions will allow for epoch-specific lexemes.
### The Feature Set
Phones in FCA are defined by the features they exhibit.
To add or edit a feature, use the form to enter the feature name and the phones associated with the feature in the `+` and `-` inputs.
Phones should be separated by a forward slash and may be represented with multiple characters.
For example:
`aspirated + tʰ / pʰ / kʰ - t / p / k / ʔ`
Results in:
`[+ aspirated] = tʰ / pʰ / kʰ [- aspirated] = t / p / k / ʔ`
If the feature already exists, any phones associated with that feature will be replaced with the phones in the form.
A feature is not required to have a value for every phone, and every phone is not required to have a value for every feature.
Rules targeting `-` values for specific features will not target phones that are not defined in the feature set.
For example:
`[- aspirated]>ʔ/[+ vowel]_.`
This rule will not operate on the phone `ʊ` in `haʊs` as it was not defined with a negative `aspirated` value above.
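The lookup described above can be sketched in plain JavaScript (a hypothetical illustration, not FCA's actual implementation; the feature data mirrors the `aspirated` example):

```javascript
// Hypothetical sketch of FCA-style feature matching (not the real FCA code).
// A feature maps to the phones explicitly marked "+" or "-" for it.
const features = {
  aspirated: { plus: ['tʰ', 'pʰ', 'kʰ'], minus: ['t', 'p', 'k', 'ʔ'] },
};

// A phone matches "[- aspirated]" only if it appears in the minus list;
// a phone with no value for the feature (like "ʊ") never matches.
const matchesMinus = (phone, feature) =>
  Boolean(features[feature]) && features[feature].minus.includes(phone);

console.log(matchesMinus('t', 'aspirated')); // true
console.log(matchesMinus('ʊ', 'aspirated')); // false: no value defined for this feature
```

This is why the rule above skips `ʊ` in `haʊs`: absence of a feature value is not the same as a `-` value.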
#### Suprasegmentals
Toneme example using Mandarin tone system:
```
[+ tone] = ˥ / ˧˥ / ˨˩˦ / ˥˩ [- tone] =
[+ high] = ˥ / ˥˩ [- high] = ˧˥ / ˨˩˦
[+ low] = ˨˩˦ [- low] = ˥ / ˥˩ / ˧˥
[+ rising] = ˧˥ [- rising] = ˥ / ˨˩˦ / ˥˩
[+ falling] = ˨˩˦ / ˥˩ [- falling] = ˥ / ˧˥
```
Length example using Modern Standard Arabic (without allophonic variation):
```
[+ long] = aː / iː / uː [- long] = a / i / u / aj / aw
[+ geminate] = mː / nː / tː / tˤː / ... [- geminate] = m / n / t / tˤ / ...
```
#### Future Changes to the Feature Set
- Future versions of FCA will allow for greater suprasegmental feature support.
- Future versions will allow for exclusive features. In the example below, a phone cannot have a labial value and a coronal value simultaneously:
```
[!place
[labial
[+ labiodental] = f
[- labiodental] = p / m / kp / ŋm
[+ labiovelar] = kp / ŋm
[- labiovelar] = f / p / m
]
[coronal
[+ anterior] = t̪ / n̪ / t / n
[- anterior] = c / ɲ / ʈ / ɳ
[+ distributed] = t̪ / n̪ / c / ɲ
[- distributed] = t / n / ʈ / ɳ
]
...
]
```
### Epochs
This is where the rules to transform your lexicon are defined.
An FCA project can have as many 'epochs' or suites of sound change rules as you would like to define.
Rules can be defined using phones or features:
- `n>ŋ/._k`
- `[+ nasal alveolar]>[- alveolar + velar]/._[+ velar]`
These two rules will both act on the phone `n` in the sequence `nk`, transforming it into `ŋ`; however, the feature-defined rule could also transform the `n` in the sequences `ng`, `nŋ`, `nx`, etc.
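As a rough sketch, a literal-phone rule such as `n>ŋ/._k` could be applied like this (hypothetical code; splitting on single characters ignores FCA's multi-character phone and feature support):

```javascript
// Hypothetical sketch: apply "n>ŋ/._k" (n becomes ŋ when followed by k).
// FCA's real engine is feature-aware; this handles only literal single-character phones.
const applyRule = (lexeme, { target, result, follows }) =>
  lexeme
    .split('')
    .map((phone, i, phones) =>
      phone === target && phones[i + 1] === follows ? result : phone
    )
    .join('');

console.log(applyRule('anka', { target: 'n', result: 'ŋ', follows: 'k' })); // "aŋka"
```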
By default, FCA will pipe the initial lexicon into each one of these epochs and apply their transformations independently.
The output of one epoch can, however, be piped into another epoch by selecting its `parent` from the dropdown when adding a new epoch.
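The piping behavior described above can be sketched as follows (hypothetical names, not FCA's internals; assumes epochs are listed with parents before children):

```javascript
// Hypothetical sketch of epoch piping: each epoch transforms a lexicon.
// An epoch with a parent receives that parent's output instead of the input lexicon.
const runEpochs = (lexicon, epochs) => {
  const outputs = {};
  for (const epoch of epochs) {
    const input = epoch.parent ? outputs[epoch.parent] : lexicon;
    outputs[epoch.name] = input.map(epoch.apply);
  }
  return outputs;
};

const epochs = [
  { name: 'A', apply: w => w.replace(/s/g, 'h') },            // s > h
  { name: 'B', parent: 'A', apply: w => w.replace(/h/g, '') }, // then h > 0
];
console.log(runEpochs(['sasa'], epochs)); // A: ['haha'], B: ['aa']
```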

22
asset-manifest.json Normal file

@@ -0,0 +1,22 @@
{
"files": {
"main.css": "/feature-change-applier/static/css/main.3576d19b.chunk.css",
"main.js": "/feature-change-applier/static/js/main.ebcfda61.chunk.js",
"main.js.map": "/feature-change-applier/static/js/main.ebcfda61.chunk.js.map",
"runtime-main.js": "/feature-change-applier/static/js/runtime-main.7788e262.js",
"runtime-main.js.map": "/feature-change-applier/static/js/runtime-main.7788e262.js.map",
"static/js/2.7077a6de.chunk.js": "/feature-change-applier/static/js/2.7077a6de.chunk.js",
"static/js/2.7077a6de.chunk.js.map": "/feature-change-applier/static/js/2.7077a6de.chunk.js.map",
"index.html": "/feature-change-applier/index.html",
"precache-manifest.48fe5eb4f1cf3f337bf29c365ac35b5c.js": "/feature-change-applier/precache-manifest.48fe5eb4f1cf3f337bf29c365ac35b5c.js",
"service-worker.js": "/feature-change-applier/service-worker.js",
"static/css/main.3576d19b.chunk.css.map": "/feature-change-applier/static/css/main.3576d19b.chunk.css.map",
"static/js/2.7077a6de.chunk.js.LICENSE": "/feature-change-applier/static/js/2.7077a6de.chunk.js.LICENSE"
},
"entrypoints": [
"static/js/runtime-main.7788e262.js",
"static/js/2.7077a6de.chunk.js",
"static/css/main.3576d19b.chunk.css",
"static/js/main.ebcfda61.chunk.js"
]
}

(binary image file changed: 318 B before, 318 B after)
1
index.html Normal file

@@ -0,0 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="/feature-change-applier/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><link rel="manifest" href="/feature-change-applier/manifest.json"/><link rel="stylesheet" href="/feature-change-applier/stylesheets/reset.css"><link href="https://fonts.googleapis.com/css?family=Catamaran|Fira+Code&display=swap" rel="stylesheet"><title>Feature Change Applier</title><link href="/feature-change-applier/static/css/main.3576d19b.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(p){function e(e){for(var r,t,n=e[0],o=e[1],u=e[2],a=0,l=[];a<n.length;a++)t=n[a],Object.prototype.hasOwnProperty.call(i,t)&&i[t]&&l.push(i[t][0]),i[t]=0;for(r in o)Object.prototype.hasOwnProperty.call(o,r)&&(p[r]=o[r]);for(s&&s(e);l.length;)l.shift()();return c.push.apply(c,u||[]),f()}function f(){for(var e,r=0;r<c.length;r++){for(var t=c[r],n=!0,o=1;o<t.length;o++){var u=t[o];0!==i[u]&&(n=!1)}n&&(c.splice(r--,1),e=a(a.s=t[0]))}return e}var t={},i={1:0},c=[];function a(e){if(t[e])return t[e].exports;var r=t[e]={i:e,l:!1,exports:{}};return p[e].call(r.exports,r,r.exports,a),r.l=!0,r.exports}a.m=p,a.c=t,a.d=function(e,r,t){a.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},a.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},a.t=function(r,e){if(1&e&&(r=a(r)),8&e)return r;if(4&e&&"object"==typeof r&&r&&r.__esModule)return r;var t=Object.create(null);if(a.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:r}),2&e&&"string"!=typeof r)for(var n in r)a.d(t,n,function(e){return r[e]}.bind(null,n));return t},a.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return a.d(r,"a",r),r},a.o=function(e,r){return 
Object.prototype.hasOwnProperty.call(e,r)},a.p="/feature-change-applier/";var r=this["webpackJsonpfeature-change-applier"]=this["webpackJsonpfeature-change-applier"]||[],n=r.push.bind(r);r.push=e,r=r.slice();for(var o=0;o<r.length;o++)e(r[o]);var s=n;f()}([])</script><script src="/feature-change-applier/static/js/2.7077a6de.chunk.js"></script><script src="/feature-change-applier/static/js/main.ebcfda61.chunk.js"></script></body></html>

15723
package-lock.json generated

File diff suppressed because it is too large.


@@ -1,49 +0,0 @@
{
"name": "feature-change-applier",
"version": "0.1.0",
"private": true,
"homepage": "https://sorrelbri.github.io/feature-change-applier",
"dependencies": {
"flow-bin": "^0.113.0",
"gh-pages": "^2.2.0",
"local-storage": "^2.0.0",
"moo": "^0.5.1",
"nearley": "^2.19.1",
"node-sass": "^4.13.1",
"react": "^16.12.0",
"react-dom": "^16.12.0",
"react-router-dom": "^5.1.2",
"react-scripts": "^3.3.0"
},
"scripts": {
"start": "react-scripts start",
"compile-grammar": "nearleyc src/utils/latl/grammar.ne -o src/utils/latl/grammar.js",
"test-grammar": "nearley-test src/utils/latl/grammar.js --input",
"flow": "flow",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.3.2",
"react-test-renderer": "^16.12.0"
}
}


@@ -0,0 +1,26 @@
self.__precacheManifest = (self.__precacheManifest || []).concat([
{
"revision": "03dc3c02b8e82c291b72d17b881f91e9",
"url": "/feature-change-applier/index.html"
},
{
"revision": "c70b903d08c899aab0d6",
"url": "/feature-change-applier/static/css/main.3576d19b.chunk.css"
},
{
"revision": "343c2be8c03548f52986",
"url": "/feature-change-applier/static/js/2.7077a6de.chunk.js"
},
{
"revision": "d705cb622423d72c5defbf368ca70dcc",
"url": "/feature-change-applier/static/js/2.7077a6de.chunk.js.LICENSE"
},
{
"revision": "c70b903d08c899aab0d6",
"url": "/feature-change-applier/static/js/main.ebcfda61.chunk.js"
},
{
"revision": "3e2239a2ec4a190765bc",
"url": "/feature-change-applier/static/js/runtime-main.7788e262.js"
}
]);

Six binary files not shown (image sizes before: 90 KiB, 152 KiB, 153 KiB, and 148 KiB; two without size information).

@@ -1,16 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<link rel="stylesheet" href="%PUBLIC_URL%/stylesheets/reset.css">
<link href="https://fonts.googleapis.com/css?family=Catamaran|Fira+Code&display=swap" rel="stylesheet">
<title>Feature Change Applier</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
</body>
</html>


@@ -1,88 +0,0 @@
set NASAL_PULMONIC_CONSONANTS = [ m̥, m, ɱ, n̼, n̥, n, ɳ̊, ɳ, ɲ̊, ɲ, ŋ̊, ŋ, ɴ ],
STOP_PULMONIC_CONSONANTS = [ p, b, p̪, b̪, t̼, d̼, t, d, ʈ, ɖ, c, ɟ, k, ɡ, q, ɢ, ʡ, ʔ ],
S_FRICATIVE_PULMONIC_CONSONANTS = [ s, z, ʃ, ʒ, ʂ, ʐ, ɕ, ʑ ],
FRICATIVE_PULMONIC_CONSONANTS = [ ɸ, β, f, v, θ̼, ð̼, θ, ð, θ̠, ð̠, ɹ̠̊˔, ɹ̠˔, ɻ˔, ç, ʝ, x, ɣ, χ, ʁ, ħ, ʕ, h, ɦ ],
APPROXIMANT_PULMONIC_CONSONANTS = [ ʋ̥, ʋ, ɹ̥, ɹ, ɻ̊, ɻ, j̊, j, ɰ̊, ɰ, ʔ̞ ],
TAP_PULMONIC_CONSONANTS = [ ⱱ̟, ⱱ, ɾ̼, ɾ̥, ɾ, ɽ̊, ɽ, ɢ̆, ʡ̆ ],
TRILL_PULMONIC_CONSONANTS = [ ʙ̥, ʙ, r̥, r, ɽ̊r̥, ɽr, ʀ̥, ʀ, ʜ, ʢ ],
L_FRICATIVE_PULMONIC_CONSONANTS = [ ɬ, ɮ, ɭ̊˔, ɭ˔, ʎ̝̊, ʎ̝, ʟ̝̊, ʟ̝ ],
L_APPROXIMANT_PULMONIC_CONSONANTS = [ l̥, l, ɭ̊, ɭ, ʎ̥, ʎ, ʟ̥, ʟ, ʟ̠ ],
L_TAP_PULMONIC_CONSONANTS = [ ɺ, ɭ̆, ʎ̆, ʟ̆ ],
AFFRICATE_PULMONIC_CONSONANTS = [ pɸ, bβ, p̪f, b̪v, t̪θ, d̪ð, tɹ̝̊, dɹ̝, t̠ɹ̠̊˔, d̠ɹ̠˔, cç, ɟʝ, kx, ɡɣ, qχ, ʡʢ, ʔh ],
S_AFFRICATE_PULMONIC_CONSONANTS = [ ts, dz, t̠ʃ, d̠ʒ, ʈʂ, ɖʐ, tɕ, dʑ ],
L_AFFRICATE_PULMONIC_CONSONANTS = [ tɬ, dɮ, ʈɭ̊˔, cʎ̝̊, kʟ̝̊, ɡʟ̝ ],
DOUBLE_STOP_PULMONIC_CONSONANTS = [ t͡p, d͡b, k͡p, ɡ͡b, q͡ʡ ],
DOUBLE_NASAL_PULMONIC_CONSONANTS = [ n͡m, ŋ͡m ],
DOUBLE_FRICATIVE_PULMONIC_CONSONANTS = [ ɧ ],
DOUBLE_APPROXIMANT_PULMONIC_CONSONANTS = [ ʍ, w, ɥ̊, ɥ, ɫ ]
set PULMONIC_CONSONANTS, C = { NASAL_PULMONIC_CONSONANTS or STOP_PULMONIC_CONSONANTS
or S_FRICATIVE_PULMONIC_CONSONANTS or FRICATIVE_PULMONIC_CONSONANTS
or APPROXIMANT_PULMONIC_CONSONANTS or TAP_PULMONIC_CONSONANTS
or TRILL_PULMONIC_CONSONANTS or L_FRICATIVE_PULMONIC_CONSONANTS
or L_APPROXIMANT_PULMONIC_CONSONANTS or L_TAP_PULMONIC_CONSONANTS
or AFFRICATE_PULMONIC_CONSONANTS or S_AFFRICATE_PULMONIC_CONSONANTS
or L_AFFRICATE_PULMONIC_CONSONANTS or DOUBLE_STOP_PULMONIC_CONSONANTS
or DOUBLE_NASAL_PULMONIC_CONSONANTS or DOUBLE_FRICATIVE_PULMONIC_CONSONANTS
or DOUBLE_APPROXIMANT_PULMONIC_CONSONANTS
}
set STOP_EJECTIVE_CONSONANTS = [ pʼ, tʼ, ʈʼ, cʼ, kʼ, qʼ, ʡʼ ],
FRICATIVE_EJECTIVE_CONSONANTS = [ ɸʼ, fʼ, θʼ, sʼ, ʃʼ, ʂʼ, ɕʼ, xʼ, χʼ ],
L_FRICATIVE_EJECTIVE_CONSONANTS = [ ɬʼ ],
AFFRICATE_EJECTIVE_CONSONANTS = [ tsʼ, t̠ʃʼ, ʈʂʼ, kxʼ, qχʼ ],
L_AFFRICATE_EJECTIVE_CONSONANTS = [ tɬʼ, cʎ̝̊ʼ, kʟ̝̊ʼ ]
set EJECTIVE_CONSONANTS = { STOP_EJECTIVE_CONSONANTS or FRICATIVE_EJECTIVE_CONSONANTS
or L_FRICATIVE_EJECTIVE_CONSONANTS or AFFRICATE_EJECTIVE_CONSONANTS
or L_AFFRICATE_EJECTIVE_CONSONANTS
}
set TENUIS_CLICK_CONSONANTS = [ ʘ, ǀ, ǃ, ǂ ],
VOICED_CLICK_CONSONANTS = [ ʘ̬, ǀ̬, ǃ̬, ǂ̬ ],
NASAL_CLICK_CONSONANTS = [ ʘ̃, ǀ̃, ǃ̃, ǂ̃ ],
L_CLICK_CONSONANTS = [ ǁ, ǁ̬ ]
set CLICK_CONSONANTS = { TENUIS_CLICK_CONSONANTS or VOICED_CLICK_CONSONANTS
or NASAL_CLICK_CONSONANTS or L_CLICK_CONSONANTS
}
set IMPLOSIVE_CONSONANTS = [ ɓ, ɗ, ᶑ, ʄ, ɠ, ʛ, ɓ̥, ɗ̥, ᶑ̊, ʄ̊, ɠ̊, ʛ̥ ]
set NON_PULMONIC_CONSONANTS = { EJECTIVE_CONSONANTS or CLICK_CONSONANTS or IMPLOSIVE_CONSONANTS }
set CONSONANTS = { PULMONIC_CONSONANTS or NON_PULMONIC_CONSONANTS }
set MODAL_VOWELS = [ i, y, ɨ, ʉ, ɯ, u, ɪ, ʏ, ʊ, e, ø, ɘ, ɵ, ɤ, o, ø̞, ə, o̞, ɛ, œ, ɜ, ɞ, ʌ, ɔ, æ, ɐ, a, ɶ, ä, ɑ, ɒ ],
BREATHY_VOWELS = { [ V ] in MODAL_VOWELS yield [ V̤ ] },
VOICELESS_VOWELS = { [ V ] in MODAL_VOWELS yield [ V̥ ] },
CREAKY_VOWELS = { [ V ] in MODAL_VOWELS yield [ V̰ ] }
set SHORT_ORAL_VOWELS = { MODAL_VOWELS or BREATHY_VOWELS or CREAKY_VOWELS or VOICELESS_VOWELS },
LONG_ORAL_VOWELS = { [ V ] in SHORT_ORAL_VOWELS yield [ Vː ] },
ORAL_VOWELS = { SHORT_ORAL_VOWELS or LONG_ORAL_VOWELS }
set NASAL_VOWELS = { [ V ] in ORAL_VOWELS yield [ Ṽ ] },
SHORT_NASAL_VOWELS = { [ Vː ] in NASAL_VOWELS yield [ V ]ː },
LONG_NASAL_VOWELS = { [ Vː ] in NASAL_VOWELS }
set VOWELS = { ORAL_VOWELS or NASAL_VOWELS }
set PHONES = { VOWELS or CONSONANTS }
; print [ GLOBAL ]
[lateral
+=
L_AFFRICATE_EJECTIVE_CONSONANTS, L_AFFRICATE_PULMONIC_CONSONANTS, L_APPROXIMANT_PULMONIC_CONSONANTS,
L_CLICK_CONSONANTS, L_FRICATIVE_EJECTIVE_CONSONANTS, L_FRICATIVE_PULMONIC_CONSONANTS, L_TAP_PULMONIC_CONSONANTS
-=
{ not { [+ lateral ] in CONSONANTS } }, VOWELS
; alternative
; { not { [+ lateral ] in PHONES } }
]
*proto-lang
|child-lang


@@ -1,644 +0,0 @@
; -------- GA ENGLISH PHONETIC INVENTORY
; ---- VOWELS = æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟
; -- NASAL = æ̃ / ẽ / ə̃ / ɑ̃ / ɔ̃ / ɪ̃ / ɛ̃ / ʌ̃ / ʊ̃ / ĩ / ũ
; ɪ̞ / ʊ̞ = lowered
; u̟ = advanced
; -- LABIAL = u̟ / ʊ̞ / ɔ
; -- +HIGH = i / u̟ / ʊ̞ / ɪ̞
; -- -HIGH = ɑ / æ / e / ə / ɛ / ʌ
; -- +LOW = ɑ / æ / ɛ
; -- -LOW = i / u̟ / ʊ̞ / ɪ̞ / e / ə / ʌ
; -- +BACK = ɑ / ɔ / ʌ / ʊ̞ / u̟
; -- -BACK = æ / e / ə / ɪ̞ / ɛ / i
; -- +TENSE = e / i / u̟ / ɑ
; -- -TENSE = æ / ə / ɪ̞ / ɛ / ʌ / ʊ̞ / ɔ
; ---- DIPHTHONGS = eə / eɪ̯ / ju̟ / äɪ̞ / ɔɪ̞ / oʊ̞ / aʊ̞ / ɑɹ / iɹ / ɛɹ / ɔɹ / ʊɹ
; ---- CONSONANTS = p (pʰ) / b (b̥) / t (tʰ)(ɾ)(ʔ) / d (d̥)(ɾ) / tʃ / dʒ (d̥ʒ̊) / k (kʰ) / g (g̊) / f / v (v̥) / θ / ð (ð̥) /
; s / z (z̥) / ʃ / ʒ (ʒ̊) / h (ɦ)(ç) / m (ɱ)(m̩) / n(n̩) / ŋ / l (l̩)/ ɹ (ɹʲ ~ ɹˤ)(ɹ̩) / w (w̥) / j / x / ʔ
; -- PLOSIVES = p / p' / pʰ / t / t' / tʰ ɾ / k / k' / kʰ
; -- AFFRICATES = tʃ / dʒ
; -- FRICATIVES = f / v / θ / ð / s / z / ʃ / ʒ / ç / x
; -- NASAL OBSTRUENTS = m ɱ / n / ŋ
; -- LIQUIDS = l
; -- RHOTIC LIQUIDS = ɹ ɹʲ ɹˤ
; -- SYLLABIC CONSONANTS = m̩ / n̩ / l̩ / ɹ̩
; -- GLIDES = j / w
; -- LARYNGEALS = h ɦ / ʔ [- consonantal sonorant +/- LARYNGEAL FEATURES] only
; -------- distinctive groups
set PLOSIVES = [ p, pʰ, t, tʼ, tʰ, ɾ, kʼ, k, kʰ ]
AFFRICATES = [ tʃʰ, dʒ ]
FRICATIVES = [ f, v, θ, ð, s, z, ʃ, ʒ, ç, x ]
NASALS = [ m, ɱ, n, ŋ ]
LIQUIDS = [ l, ɹ, ɹʲ, ɹˤ ]
SYLLABICS = [ m̩, n̩, l̩, ɹ̩ ]
VOWELS = [ æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟ ]
GLIDES = [ j, w ]
LARYNGEALS = [ h, ɦ, ʔ ]
VOWELS = [ æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟ ]
; ---- implicit
; GLOBAL { all sets }
; ---- set join operations non-mutable!
; { SET_A not SET_B } left anti join
; { SET_A and SET_B } inner join
; { SET_A or SET_B } full outer join
; { not SET_A } = { GLOBAL not SET_A }
; ---- unnecessary sugar
; { not SET_A nor SET_B } = { GLOBAL not { SET_A or SET_B } }
; ---- set character operations - non-mutable!
; { [ Xy ] in SET_A } FILTER: where X is any character and y is a filtering character
; { SET_A yield [ Xy ] } CONCATENATE: performs transformation with (prepended or) appended character
; { SET_A yield [ X concat y ] }
; { SET_A yield [ y concat X ] }
; { SET_A yield y[ X ] } DISSOCIATE: performs transformation removing prepended (or appended) character
; { SET_A yield y dissoc [ X ] }
; { SET_A yield [ X ] dissoc y }
; { [ Xy ] in SET_A yield [ X ]y } combined FILTER and DISSOCIATE
; ---- TENTATIVE!
; ---- set feature operations - non-mutable!
; { [ + feature1 - feature2 ] in SET_A } FILTER: where feature1 and feature2 are filtering features
; { SET_A yield [ X + feature1 ] } TRANSFORMATION: performs transformation with (prepended or) appended character
; { SET_A yield [ X - feature1 ] }
; { SET_A yield [ X - feature1 + feature2 ] }
; { [ X + feature1 - feature2 ] in SET_A yield [ - feature1 + feature2 ] } combined FILTER and TRANSFORMATION
; ---- MAPPING
set PLOSIVES = [ p, t, k ],
FRICATIVES = [ f, s, x ],
; pairs PLOSIVES with FRICATIVES that have matching features = [ pf, ts, kx ]
AFFRICATES = { PLOSIVES yield [ X concat { [ [ X ] - fricative ] in FRICATIVES } ] }
; ---- example with join, character, and feature operations
; set SET_C = { [ PHONE +feature1 ] in { SET_A or SET_B } yield [ PHONE concat y ] }
; -------- main class features
[consonantal
+=
PLOSIVES, AFFRICATES, FRICATIVES, NASALS, LIQUIDS, SYLLABICS
-=
VOWELS, GLIDES, LARYNGEALS
]
[sonorant
+=
VOWELS, GLIDES, LIQUIDS, NASALS, SYLLABICS
-=
PLOSIVES, AFFRICATES, FRICATIVES, LARYNGEALS
]
[approximant
+=
VOWELS, LIQUIDS, GLIDES,
; SYLLABIC LIQUIDS
l̩, ɹ̩
-=
PLOSIVES, AFFRICATES, FRICATIVES, NASALS,
; SYLLABIC NASALS
m̩, n̩
]
; -------- laryngeal features
[voice
+=
VOWELS, GLIDES, LIQUIDS, NASALS, SYLLABICS,
; VOICED FRICATIVES
v, ð, z, ʒ,
; VOICED AFFRICATES
dʒ,
; VOICED LARYNGEALS
ɦ
-=
PLOSIVES,
; VOICELESS AFFRICATES
tʃ,
; VOICELESS FRICATIVES
f, θ, s, ʃ, ç, x,
; VOICELESS LARYNGEALS
h, ʔ
]
[spreadGlottis
+=
; ASPIRATED PLOSIVES
pʰ, tʰ, kʰ,
; ASPIRATED AFFRICATES
; SPREAD LARYNGEALS
h ɦ
-=
VOWELS, FRICATIVES, NASALS, LIQUIDS, SYLLABICS, GLIDES,
; UNASPIRATED PLOSIVES
p, pʼ, t, tʼ, ɾ, k, kʼ,
; UNASPIRATED AFFRICATES
tʃ, dʒ,
; CONSTRICTED LARYNGEALS
ʔ
]
[constrictedGlottis
+=
; LARYNGEALIZED RHOTIC
ɹˤ,
; CONSTRICTED LARYNGEAL
ʔ,
; EJECTIVE PLOSIVES
pʼ, tʼ, kʼ
-=
VOWELS, AFFRICATES, FRICATIVES, NASALS, SYLLABICS, GLIDES,
; UNCONSTRICTED PLOSIVES
{ PLOSIVES not [ p', t', k' ] },
; NON-CONSTRICTED LIQUIDS
l, ɹ ɹʲ,
; SPREAD LARYNGEALS
h ɦ,
]
; -------- manner features
[continuant
+=
; FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ, ç, x,
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; LIQUIDS + RHOTICS
l, ɹ ɹʲ ɹˤ,
; GLIDES
j, w,
; SYLLABIC LIQUIDS
l̩, ɹ̩,
; TAPS
ɾ
-=
; NON-TAP PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ, k, kʼ, kʰ,
; AFFRICATES
tʃ, dʒ,
; NASALS
m ɱ, n, ŋ,
; SYLLABIC NASALS
m̩, n̩
]
[nasal
+=
; NASALS
m ɱ, n, ŋ,
; SYLLABIC NASALS
m̩, n̩
-=
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ, ç, x,
; LIQUIDS + RHOTICS
l, ɹ ɹʲ ɹˤ,
; GLIDES
j, w,
; SYLLABIC LIQUIDS
l̩, ɹ̩,
; PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ, k, kʼ, kʰ,
; AFFRICATES
tʃ, dʒ,
]
[strident
+=
; STRIDENT FRICATIVES
f, v, s, z, ʃ, ʒ,
; STRIDENT AFFRICATES
tʃ, dʒ
-=
; VOWELS
æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ, k, kʼ, kʰ,
; NON-STRIDENT FRICATIVES
θ, ð, ç, x,
; NASAL OBSTRUENTS
m ɱ, n, ŋ,
; RHOTICS + LIQUIDS
l, ɹ ɹʲ ɹˤ,
; SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩,
; GLIDES
j, w
]
[lateral
+=
; LATERAL LIQUIDS
l,
; SYLLABIC LATERALS,
-=
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ, k, kʼ, kʰ
; AFFRICATES
tʃ, dʒ
; FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ, ç, x
; NASAL OBSTRUENTS
m ɱ, n, ŋ
; RHOTIC LIQUIDS
ɹ ɹʲ ɹˤ
; NON-LIQUID SYLLABIC CONSONANTS
m̩, n̩, ɹ̩
; GLIDES
j, w
]
; -------- ---- PLACE features
; -------- labial features
[labial
+=
; ROUNDED VOWELS
u̟, ʊ̞, ɔ, ʊ̃, ũ, ɔ̃
; LABIAL PLOSIVES
p, pʼ, pʰ,
; LABIAL FRICATIVES
f, v,
; LABIAL NASALS
m ɱ,
; LABIAL SYLLABIC CONSONANTS
m̩,
; LABIAL GLIDES
w
-=
; UNROUNDED VOWELS
æ, e, ə, ɑ, ɪ̞, ɛ, ʌ, i, æ̃, ẽ, ə̃, ɑ̃, ɪ̃, ɛ̃, ʌ̃, ĩ,
; NON-LABIAL PLOSIVES
t, tʼ, tʰ ɾ, k, kʼ, kʰ,
; NON-LABIAL AFFRICATES
tʃ, dʒ,
; NON-LABIAL FRICATIVES
θ, ð, s, z, ʃ, ʒ, ç, x,
; NON-LABIAL NASAL OBSTRUENTS
n, ŋ,
; LIQUIDS
l,
; RHOTIC LIQUIDS
ɹ ɹʲ ɹˤ,
; NON-LABIAL SYLLABIC CONSONANTS
n̩, l̩, ɹ̩,
; NON-LABIAL GLIDES
j
]
; -------- coronal features
[coronal
+=
; CORONAL PLOSIVES
t, tʼ, tʰ ɾ,
; CORONAL AFFRICATES
tʃ, dʒ,
; CORONAL FRICATIVES
θ, ð, s, z, ʃ, ʒ,
; CORONAL NASALS
n,
; CORONAL LIQUIDS
l
; CORONAL RHOTIC LIQUIDS
ɹ
; CORONAL SYLLABIC CONSONANTS
n̩, l̩, ɹ̩
-=
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; NON-CORONAL PLOSIVES
p, pʼ, pʰ, k, kʼ, kʰ
; NON-CORONAL FRICATIVES
f, v, ç, x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ, ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩,
; NON-CORONAL GLIDES
j, w
]
[anterior
+=
; ALVEOLAR PLOSIVES
t, tʼ, tʰ ɾ,
; ALVEOLAR AFFRICATES
tʃ, dʒ,
; DENTAL FRICATIVES
θ, ð,
; ALVEOLAR FRICATIVES
s, z,
; ALVEOLAR NASALS
n,
; ALVEOLAR LIQUIDS
l
; ALVEOLAR SYLLABIC CONSONANTS
n̩, l̩,
-=
; POSTALVEOLAR FRICATIVES
ʃ, ʒ,
; POSTALVEOLAR RHOTIC LIQUIDS
ɹ,
; POSTALVEOLAR SYLLABIC CONSONANTS
ɹ̩,
; -- NON-CORONALs
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; NON-CORONAL PLOSIVES
p, pʼ, pʰ, k, kʼ, kʰ
; NON-CORONAL FRICATIVES
f, v, ç, x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ, ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩,
; NON-CORONAL GLIDES
j, w
]
[distributed
+=
; DENTAL FRICATIVES
θ, ð,
; POSTALVEOLAR FRICATIVES
ʃ, ʒ,
; POSTALVEOLAR RHOTIC LIQUIDS
ɹ,
; POSTALVEOLAR SYLLABIC CONSONANTS
ɹ̩,
-=
; apical, retroflex
; ALVEOLAR PLOSIVES
t, tʼ, tʰ ɾ,
; ALVEOLAR FRICATIVES
s, z,
; ALVEOLAR NASALS
n,
; ALVEOLAR LIQUIDS
l
; ALVEOLAR SYLLABIC CONSONANTS
n̩, l̩,
; -- NON-CORONALS
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; NON-CORONAL PLOSIVES
p, pʼ, pʰ, k, kʼ, kʰ
; NON-CORONAL FRICATIVES
f, v, ç, x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ, ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩,
; NON-CORONAL GLIDES
j, w
]
; -------- dorsal features
[dorsal
+=
; VOWELS
æ, e, ə, ɑ, ɔ, ɪ̞, ɛ, ʌ, ʊ̞, i, u̟, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃, ĩ, ũ
; DORSAL PLOSIVES
k, kʼ, kʰ,
; DORSAL FRICATIVES
ç, x,
; DORSAL NASAL OBSTRUENTS
ŋ,
; DORSAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; DORSAL GLIDES
j
-=
; NON-DORSAL PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ,
; NON-DORSAL AFFRICATES
tʃ, dʒ,
; NON-DORSAL FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ,
; NON-DORSAL NASALS
m ɱ, n,
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩
; NON-DORSAL GLIDES
w
]
[high
+=
; HIGH VOWELS
i, u̟, ʊ̞, ɪ̞, ĩ, ũ, ʊ̃, ɪ̃
; HIGH DORSAL PLOSIVES
k, kʼ, kʰ,
; HIGH DORSAL FRICATIVES
ç, x,
; HIGH DORSAL NASAL OBSTRUENTS
ŋ,
; HIGH RHOTIC LIQUIDS
ɹʲ
; HIGH DORSAL GLIDES
j, w
-= χ, e, o, a
; NON-HIGH VOWELS
ɑ, æ, e, ə, ɛ, ʌ, æ̃, ẽ, ə̃, ɑ̃, ɔ̃, ɛ̃, ʌ̃,
; NON-HIGH RHOTIC LIQUIDS
ɹˤ
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ,
; NON-DORSAL AFFRICATES
tʃ, dʒ,
; NON-DORSAL FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ,
; NON-DORSAL NASALS
m ɱ, n,
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩
; NON-DORSAL GLIDES
w
]
[low
+=
; LOW VOWELS
ɑ, æ, ɛ, æ̃, ɑ̃, ɛ̃,
; LOW DORSAL RHOTIC LIQUIDS
ɹˤ
-= a, ɛ, ɔ
; NON-LOW VOWELS
i, u̟, ʊ̞, ɪ̞, e, ə, ʌ, ẽ, ə̃, ɔ̃, ɪ̃, ʌ̃, ʊ̃, ĩ, ũ
; NON-LOW DORSAL PLOSIVES
k, kʼ, kʰ,
; NON-LOW DORSAL FRICATIVES
ç, x,
; NON-LOW DORSAL NASAL OBSTRUENTS
ŋ,
; NON-LOW DORSAL RHOTIC LIQUIDS
ɹʲ
; DORSAL GLIDES
j
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ,
; NON-DORSAL AFFRICATES
tʃ, dʒ,
; NON-DORSAL FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ,
; NON-DORSAL NASALS
m ɱ, n,
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩
; NON-DORSAL GLIDES
w
]
[back
+=
; k, kʼ, ɣ, χ, u, ə, o, ʌ, ɑ
; BACK VOWELS
ɑ, ɔ, ʌ, ʊ̞, u̟, ɑ̃, ɔ̃, ʌ̃, ʊ̃, ũ,
; BACK DORSAL PLOSIVES
k, kʼ, kʰ,
; BACK DORSAL FRICATIVES
x,
; BACK DORSAL NASAL OBSTRUENTS
ŋ,
; BACK DORSAL RHOTIC LIQUIDS
ɹˤ
-= ç, k̟, i, y, ø, ɛ
; NON-BACK DORSAL FRICATIVES
ç,
; NON-BACK DORSAL RHOTIC LIQUIDS
ɹʲ
; NON-BACK DORSAL GLIDES
j
; NON-BACK VOWELS
æ, e, ə, ɪ̞, ɛ, i, æ̃, ẽ, ə̃, ɪ̃, ɛ̃, ĩ
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ,
; NON-DORSAL AFFRICATES
tʃ, dʒ,
; NON-DORSAL FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ,
; NON-DORSAL NASALS
m ɱ, n,
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩
; NON-DORSAL GLIDES
w
]
[tense ; compare to ATR or RTR
+=
; TENSE VOWELS
e, i, u̟, ɑ, ĩ, ũ, ẽ, ɑ̃,
-=
; NON-TENSE VOWELS
æ, ə, ɪ̞, ɛ, ʌ, ʊ̞, ɔ, æ̃, ə̃, ɔ̃, ɪ̃, ɛ̃, ʌ̃, ʊ̃,
; DORSAL PLOSIVES
k, kʼ, kʰ,
; DORSAL FRICATIVES
ç, x,
; DORSAL NASAL OBSTRUENTS
ŋ,
; DORSAL RHOTIC LIQUIDS
ɹʲ ɹˤ,
; DORSAL GLIDES
j
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p, pʼ, pʰ, t, tʼ, tʰ ɾ,
; NON-DORSAL AFFRICATES
tʃ, dʒ,
; NON-DORSAL FRICATIVES
f, v, θ, ð, s, z, ʃ, ʒ,
; NON-DORSAL NASALS
m ɱ, n,
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩, n̩, l̩, ɹ̩
; NON-DORSAL GLIDES
w
]
*PROTO
|Gif Lang
*PROTO
|Jif Lang
; -- Devoicing, all our z's become s's
[ + voice consonantal - nasal]>[- voice]/._.
; -- loss of schwa, the is th'
ə>0/._.
; -- Ejectivization, all our pits become pit's
[+ spreadGlottis - continuant]>[+ constrictedGlottis - spreadGlottis]/._[+ constrictedGlottis]
[+ spreadGlottis - continuant]>[+ constrictedGlottis - spreadGlottis]/[+ constrictedGlottis]_.
[+ constrictedGlottis]>0/[+ constrictedGlottis - continuant]_.
[+ constrictedGlottis]>0/._[+ constrictedGlottis - continuant]
; -- r color spreading, all our reports become rihpahts
[- consonantal tense]>[+ tense]/ɹ_.
[- consonantal tense]>[+ tense]/._ɹ
[- consonantal high]>[+ high]/ɹʲ_.
[- consonantal high]>[+ high]/._ɹʲ
[- consonantal back]>[+ back]/ɹˤ_.
[- consonantal back]>[+ back]/._ɹˤ
ɹ>0/._.
ɹʲ>0/._.
ɹˤ>0/._.
; -- Deaspiration, tiff is diff and diff is tiff
[+ spreadGlottis - continuant]>[- spreadGlottis]/._.
; "JavaScript"
; "gif or jif? I say zhaif"
; "This request returns an empty object"
; "I love going to waffle js!"
; "A donut a day makes living with the threat of pandemic easier"

39
service-worker.js Normal file

@@ -0,0 +1,39 @@
/**
* Welcome to your Workbox-powered service worker!
*
* You'll need to register this file in your web app and you should
* disable HTTP caching for this file too.
* See https://goo.gl/nhQhGp
*
* The rest of the code is auto-generated. Please don't update this file
* directly; instead, make changes to your Workbox build configuration
* and re-run your build process.
* See https://goo.gl/2aRDsh
*/
importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
importScripts(
"/feature-change-applier/precache-manifest.48fe5eb4f1cf3f337bf29c365ac35b5c.js"
);
self.addEventListener('message', (event) => {
if (event.data && event.data.type === 'SKIP_WAITING') {
self.skipWaiting();
}
});
workbox.core.clientsClaim();
/**
* The workboxSW.precacheAndRoute() method efficiently caches and responds to
* requests for URLs in the manifest.
* See https://goo.gl/S9QRab
*/
self.__precacheManifest = [].concat(self.__precacheManifest || []);
workbox.precaching.precacheAndRoute(self.__precacheManifest, {});
workbox.routing.registerNavigationRoute(workbox.precaching.getCacheKeyForURL("/feature-change-applier/index.html"), {
blacklist: [/^\/_/,/\/[^\/?]+\.[^\/]+$/],
});


@@ -1,20 +0,0 @@
.App {
text-align: center;
}
.App-logo {
height: 40vmin;
}
.App-header {
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
}
.App-link {
color: #09d3ac;
}


@@ -1,14 +0,0 @@
import React from 'react';
import './App.css';
import PhonoChangeApplier from './PhonoChangeApplier';
function App() {
return (
<div className="App" data-testid="App">
<h1 data-testid="App-name">Feature Change Applier</h1>
<PhonoChangeApplier />
</div>
);
}
export default App;


@@ -1,21 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter as Router } from 'react-router-dom';
import App from './App';
import renderer from 'react-test-renderer';
import { exportAllDeclaration } from '@babel/types';
import {render} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders App without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Router><App /></Router>, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('App', () => {
it('renders the correct title', () => {
const { getByTestId } = render(<Router><App /></Router>);
expect(getByTestId('App-name')).toHaveTextContent('Feature Change Applier');
})
})

@@ -1,52 +0,0 @@
import React, { useState, useReducer } from 'react';
import { Link, Route } from 'react-router-dom';
import './PhonoChangeApplier.scss';
import ProtoLang from './components/ProtoLang';
import Features from './components/Features';
import Epochs from './components/Epochs';
import Options from './components/Options';
import Output from './components/Output';
import Latl from './components/Latl';
import LatlOutput from './components/LatlOutput';
import { stateReducer } from './reducers/reducer';
import { clearState, waffleState } from './reducers/reducer.init';
const PhonoChangeApplier = () => {
const [ state, dispatch ] = useReducer(
stateReducer,
{},
waffleState
)
const { lexicon, phones, phonemes, epochs, options, features, results, errors, latl, parseResults } = state;
return (
<>
<Route exact path="/latl">
<Link to="/">Back to GUI</Link>
<div className="PhonoChangeApplier PhonoChangeApplier--latl">
<Latl latl={latl} dispatch={dispatch}/>
<LatlOutput results={results} options={options} parseResults={parseResults} errors={errors} dispatch={dispatch}/>
</div>
</Route>
<Route exact path="/">
<Link to="/latl">LATL</Link>
<div className="PhonoChangeApplier PhonoChangeApplier--gui" data-testid="PhonoChangeApplier">
<ProtoLang lexicon={lexicon} dispatch={dispatch}/>
<Features phones={phones} features={features} dispatch={dispatch}/>
<Epochs epochs={epochs} errors={errors} dispatch={dispatch} />
<Options options={options} dispatch={dispatch}/>
<Output results={results} options={options} dispatch={dispatch}/>
</div>
</Route>
</>
);
}
export default PhonoChangeApplier;

@@ -1,67 +0,0 @@
@import '../public/stylesheets/variables';
div.App {
max-height: 100vh;
max-width: 100vw;
line-height: 1.25em;
padding: 1em;
a {
color: map-get($colors, 'text-input')
}
h1 {
font-size: 2em;
padding: 1em 0;
}
h3 {
font-size: 1.25em;
padding: 0.5em 0;
}
h5 {
font-size: 1.1em;
padding: 0.1em 0;
font-weight: 800;
}
div.PhonoChangeApplier--gui {
display: grid;
width: 100%;
place-items: center center;
grid-template-columns: repeat(auto-fit, minmax(25em, 1fr));
grid-template-rows: repeat(auto-fill, minmax(300px, 1fr));
div {
max-width: 100%;
max-height: 50vh;
margin: 1em;
overflow-y: scroll;
}
}
div.PhonoChangeApplier--latl {
display: flex;
flex-flow: row wrap;
}
button.form, input[type="submit"].form, input[type="button"].form {
height: 2em;
border-radius: 0.25em;
border-color: transparent;
margin: 0.2em auto;
width: 10em;
}
button.form--add, input[type="submit"].form--add, input[type="button"].form--add{
background-color: greenyellow;
color: black;
}
button.form--remove, input[type="submit"].form--remove, input[type="button"].form--remove {
background-color: red;
color: white;
}
}

@@ -1,22 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter as Router } from 'react-router-dom';
import App from './App';
import PhonoChangeApplier from './PhonoChangeApplier';
import renderer from 'react-test-renderer';
import { exportAllDeclaration } from '@babel/types';
import {render} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders PhonoChangeApplier without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Router><PhonoChangeApplier /></Router>, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('App', () => {
it('renders Proto Language Lexicon', () => {
const { getByTestId } = render(<Router><PhonoChangeApplier /></Router>);
expect(getByTestId('PhonoChangeApplier')).toHaveTextContent('Proto Language Lexicon');
})
})

@@ -1,79 +0,0 @@
import React from 'react';
import './Epochs.scss';
import SoundChangeSuite from './SoundChangeSuite';
import { render } from 'react-dom';
const Epochs = ({epochs, errors, dispatch}) => {
const addEpoch = e => {
e.preventDefault()
let index = epochs.length + 1;
dispatch({
type: 'ADD_EPOCH',
value: {name: `epoch ${index}`}
})
}
const removeEpoch = (e, epochName) => {
e.preventDefault()
dispatch({
type: 'REMOVE_EPOCH',
value: {name: epochName}
});
}
const updateEpoch = (epoch, epochIndex) => {
const dispatchValue = {
name: epoch.name,
index: epochIndex,
changes: epoch.changes,
parent: epoch.parent
}
dispatch({
type: "SET_EPOCH",
value: dispatchValue
})
}
const renderAddEpochButton = index => {
if (epochs && index === epochs.length - 1 ) return (
<form onSubmit={e=>addEpoch(e)}>
<input className="form form--add" type="submit" name="add-epoch" value="Add Epoch" ></input>
</form>
)
return <></>
}
const renderEpochs = () => {
if (epochs && epochs.length) {
return epochs.map((epoch, index) => {
const epochError = errors.epoch ? errors.error : null
return (
<div
className="SoundChangeSuite"
data-testid={`${epoch.name}_SoundChangeSuite`}
key={`epoch-${index}`}
>
<SoundChangeSuite
epochIndex={index} epoch={epoch}
updateEpoch={updateEpoch} removeEpoch={removeEpoch}
epochs={epochs}
error={epochError}
/>
{renderAddEpochButton(index)}
</div>
)});
}
return renderAddEpochButton(-1)
}
return (
<>
{ renderEpochs() }
</>
);
}
export default Epochs;

@@ -1,21 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import Epochs from './Epochs';
import renderer from 'react-test-renderer';
import { exportAllDeclaration } from '@babel/types';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders Epochs without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Epochs />, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('Epochs', () => {
it('renders a suite of soundchanges', () => {
const { getByTestId } = render(<Epochs />);
})
});

@@ -1,139 +0,0 @@
// @flow
import React, {useState} from 'react';
import './Features.scss';
import type { featureAction } from '../reducers/reducer.features';
const Features = ({ phones, features, dispatch }) => {
const [feature, setFeature] = useState('aspirated')
const [ newPositivePhones, setNewPositivePhones ] = useState('tʰ / pʰ / kʰ');
const [ newNegativePhones, setNewNegativePhones ] = useState('t / p / k');
const newFeaturesSubmit = e => {
e.preventDefault();
setFeature('');
setNewPositivePhones('');
setNewNegativePhones('');
}
const handleDeleteClick = (e, feature) => {
e.preventDefault();
const deleteFeatureAction = {
type: "DELETE_FEATURE",
value: feature
}
return dispatch(deleteFeatureAction);
}
const parsePhonesFromFeatureObject = featureObject => {
const getProperty = property => object => object[property]
const getFeatureMap = (featureObject) => {
return Object.keys(featureObject).map(feature => {
const plusPhones = featureObject[feature].positive.map(getProperty('grapheme')).join(' / ');
const minusPhones = featureObject[feature].negative.map(getProperty('grapheme')).join(' / ');
return {[feature]: {plus: plusPhones, minus: minusPhones}}
})
}
const getFeatureMapJSX = (featureMap) => {
return featureMap.map((feature, index) => {
const [featureName] = Object.keys(feature);
const { plus, minus } = feature[featureName];
return (
<li key={`feature__${featureName}`}>
<span className="feature--names-and-phones">
<span className="feature--feature-name">
{`[+ ${featureName} ]`}
</span>
<span className="feature--feature-phones">
{plus}
</span>
</span>
<span className="feature--names-and-phones">
<span className="feature--feature-name">
{`[- ${featureName} ]`}
</span>
<span className="feature--feature-phones">
{minus}
</span>
</span>
<button className="delete-feature" onClick={e => handleDeleteClick(e, featureName)}>X</button>
</li>
)
})
}
const featureMap = getFeatureMap(featureObject);
const featureMapJSX = getFeatureMapJSX(featureMap);
return featureMapJSX;
}
const parseNewPhones = somePhones => {
if (somePhones === '') return [''];
return somePhones.split('/').map(phone => phone.trim());
}
const handleClickDispatch = e => dispatchFunction => actionBuilder => actionParameters => {
e.preventDefault();
return dispatchFunction(actionBuilder(actionParameters));
}
const buildAddFeatureAction = ([newPositivePhones, newNegativePhones, feature]): featureAction => (
{
type: "ADD_FEATURE",
value: {
positivePhones: parseNewPhones(newPositivePhones),
negativePhones: parseNewPhones(newNegativePhones),
feature
}
}
)
return (
<div className="Features" data-testid="Features">
<h3>Phonetic Features</h3>
<ul className="Features__list" data-testid="Features-list">
{phones ? <>{parsePhonesFromFeatureObject(features)}</> : <></>}
</ul>
<form className="Features__form" data-testid="Features-form">
<input
type="text" name="feature"
value={feature} onChange={e=> setFeature(e.target.value)}
></input>
{/* ! Positive Phones */}
<label htmlFor="positive-phones">+
<input
id="positive-phones"
type="text" name="phonemes"
value={newPositivePhones} onChange={e=> setNewPositivePhones(e.target.value)}
></input>
</label>
{/* ! Negative Phones */}
<label htmlFor="negative-phones">-
<input
id="negative-phones"
type="text" name="phonemes"
value={newNegativePhones} onChange={e=> setNewNegativePhones(e.target.value)}
></input>
</label>
<input
className="form form--add"
type="submit"
onClick={e => handleClickDispatch(e)(dispatch)(buildAddFeatureAction)([newPositivePhones, newNegativePhones, feature])}
value="Add feature"
></input>
</form>
</div>
);
}
export default Features;

@@ -1,42 +0,0 @@
div.Features {
ul.Features__list {
width: 100%;
li {
display: grid;
gap: 0.5em;
grid-template-columns: 10fr 10fr 1fr;
margin: 0.5em 0;
place-items: center center;
span.feature--names-and-phones {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(100px, 1fr));
place-items: center center;
}
span.feature-name {
font-weight: 600;
}
}
}
form {
display: flex;
flex-flow: column;
input {
margin: 0.1em;
font-size: 1em;
}
}
button.delete-feature {
background-color: red;
border-color: transparent;
border-radius: 0.5em;
color: white;
max-height: 1.5em;
}
}

@@ -1,22 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import Features from './Features';
import renderer from 'react-test-renderer';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders Features without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Features />, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('Features', () => {
it('renders the correct subtitle', () => {
const { getByTestId } = render(<Features />);
expect(getByTestId('Features')).toHaveTextContent('Phonetic Features');
});
});

@@ -1,28 +0,0 @@
import React from 'react';
import './Latl.scss';
const Latl = ({latl, dispatch}) => {
const { innerWidth, innerHeight } = window;
const handleChange = e => {
const setLatlAction = {
type: 'SET_LATL',
value: e.target.value
}
dispatch(setLatlAction);
}
return (
<div className="Latl">
<h3>.LATL</h3>
<textarea name="latl" id="latl"
value={latl}
cols={'' + Math.floor(innerWidth / 15)}
rows={'' + Math.floor(innerHeight / 30)}
onChange={handleChange}
/>
</div>
);
}
export default Latl;

@@ -1,3 +0,0 @@
div.Latl {
min-width: 80vw;
}

@@ -1,69 +0,0 @@
import React from 'react';
import './LatlOutput.scss';
import Output from './Output';
const LatlOutput = ({results, options, dispatch, errors, parseResults}) => {
const handleClick = e => dispatchFunc => {
e.preventDefault()
return dispatchFunc();
}
const dispatchClear = () => {
const clearAction = {
type: 'CLEAR',
value: {}
}
dispatch(clearAction)
}
const dispatchParse = () => {
const parseAction = {
type: 'PARSE_LATL',
value: {}
}
dispatch(parseAction)
}
const dispatchRun = () => {
const runAction = {
type: 'RUN',
value: {}
}
dispatch(runAction)
}
return (
<div className="LatlOutput">
<h3>Output</h3>
<form>
<input
className="form form--remove"
type="submit"
onClick={e=>handleClick(e)(dispatchClear)}
value="Clear"
/>
<input
id="Parse"
name="Parse"
className="form form--add"
type="submit"
onClick={e=>handleClick(e)(dispatchParse)}
value="Parse"
/>
<input
id="Run"
name="Run"
className="form form--add"
type="submit"
onClick={e=>handleClick(e)(dispatchRun)}
value="Run"
/>
</form>
<Output results={results} errors={errors} options={options} parseResults={parseResults}/>
</div>
);
}
export default LatlOutput;

@@ -1,9 +0,0 @@
div.LatlOutput {
display: flex;
flex-flow: column nowrap;
form {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(10em, 1fr));
}
}

@@ -1,90 +0,0 @@
import React, { useState } from 'react';
import './Options.scss';
import ls from 'local-storage';
const Options = ({ options, dispatch }) => {
const [ load, setLoad ] = useState('');
const handleRadioChange = e => {
const { name, id } = e.target;
dispatch({
type: 'SET_OPTIONS',
value: {
option: name,
setValue: id
}
});
}
const handleFormSubmit = (e, options) => {
e.preventDefault();
dispatch({
type: 'RUN',
value: options
});
}
const handleOutputClearSubmit = e => {
e.preventDefault();
console.log('clearing')
dispatch({
type: 'CLEAR',
value: {}
});
}
return (
<div className="Options" data-testid="Options">
<h3>Modeling Options</h3>
<form onSubmit={e=>handleFormSubmit(e, options)} data-testid="Options-form">
<input
type="radio" name="output" id="default"
checked={options ? options.output === 'default' : true}
onChange={e=>handleRadioChange(e)}
/>
<label htmlFor="default">Default
<span className="Options__output-example"> output</span>
</label>
{/* <input
type="radio" name="output" id="proto"
checked={options ? options.output === 'proto' : false}
onChange={e=>handleRadioChange(e)}
/>
<label htmlFor="proto">Proto
<span className="Options__output-example"> output [proto]</span>
</label>
<input
type="radio" name="output" id="diachronic"
checked={options ? options.output === 'diachronic' : false}
onChange={e=>handleRadioChange(e)}
/>
<label htmlFor="diachronic">Diachronic
<span className="Options__output-example"> *proto > *epoch > output</span>
</label> */}
<input className="form form--add" type="submit" value="Run Changes"></input>
<input className="form form--remove" type="button" value="Clear Output" onClick={e=>handleOutputClearSubmit(e)}/>
</form>
{/* <form onSubmit={()=>{}}>
<label>
Load from a prior run:
<select value={load} onChange={e=>setLoad(e.target.value)}>
{localStorage.phonoChange
? ls.get('phonoChange').map(priorRun => {
return <option key={priorRun.name} value={priorRun.name}>{priorRun.name}</option>
}
) : <></>}
</select>
</label>
<input type="submit" value="Submit" />
</form> */}
</div>
);
}
export default Options;

@@ -1,9 +0,0 @@
div.Options {
form {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 0.5em;
}
}

@@ -1,22 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import Options from './Options';
import renderer from 'react-test-renderer';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders Options without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Options />, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('Options', () => {
it('renders the correct subtitle', () => {
const { getByTestId } = render(<Options />);
expect(getByTestId('Options')).toHaveTextContent('Modeling Options');
});
});

@@ -1,38 +0,0 @@
import React from 'react';
import './Output.scss';
const Output = props => {
const { results, options, errors, parseResults } = props;
const renderResults = () => {
switch(options.output) {
case 'default':
return renderDefault();
default:
return <></>
}
}
const renderDefault = () => {
return results.map((epoch, i) => {
const lexicon = epoch.lexicon.map((lexeme, i) => <span key={`${epoch.pass}-${i}`}>{lexeme}</span>);
return (
<div key={`epoch-${i}`} className="Output-epoch">
<h5>{epoch.pass}</h5>
<p className="lexicon">{lexicon}</p>
</div>
)
})
}
return (
<div className="Output" data-testid="Output">
<h3>Results of Run</h3>
<div data-testid="Output-lexicon" className="Output__container">
{parseResults ? parseResults : <></>}
{results && results.length ? renderResults() : <></>}
</div>
</div>
);
}
export default Output;

@@ -1,18 +0,0 @@
div.Output {
div.Output__container {
display: flex;
flex-flow: row wrap;
}
div.Output-epoch {
display: flex;
flex-flow: column;
p.lexicon {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(5em, 1fr));
}
}
}

@@ -1,29 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import Output from './Output';
import renderer from 'react-test-renderer';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders Output without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<Output />, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('Output', () => {
it('renders the correct subtitle', () => {
const { getByTestId } = render(<Output />);
expect(getByTestId('Output')).toHaveTextContent('Results of Run');
});
it('renders output lexicon list from output hook', () => {
const { getByTestId } = render(<Output results={[{pass: 'test', lexicon: ['word', 'lex', 'word']}]} options={{output: 'default'}}/>);
expect(getByTestId('Output-lexicon')).toContainHTML(wordListWordHTML);
});
});
const wordListWordHTML = '<div class="Output-epoch"><h5>test</h5><p class="lexicon"><span>word</span><span>lex</span><span>word</span></p></div>';

@@ -1,43 +0,0 @@
import React from 'react';
import './ProtoLang.scss';
const ProtoLang = ({ lexicon, dispatch }) => {
const getProperty = property => object => object[property];
const renderLexicon = () => {
if (!lexicon) return '';
// Code for optionally rendering epoch name with lexeme
// `\t#${lexeme.epoch.name}`
return lexicon.map(getProperty('lexeme')).join('\n');
}
const handleChange = e => {
const value = e.target.value.split(/\n/).map(line => {
const lexeme = line.split('#')[0].trim();
const epoch = line.split('#')[1] || '';
return { lexeme, epoch }
})
dispatch({
type: 'SET_LEXICON',
value
})
}
return (
<div className="ProtoLang" data-testid="ProtoLang">
<h3>Proto Language Lexicon</h3>
<br />
<form data-testid="ProtoLang-Lexicon">
<textarea
name="lexicon"
cols="30"
rows="10"
data-testid="ProtoLang-Lexicon__textarea"
value={renderLexicon()}
onChange={e => handleChange(e)}
>
</textarea>
</form>
</div>
);
}
export default ProtoLang;

@@ -1,27 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import ProtoLang from './ProtoLang';
import renderer from 'react-test-renderer';
import { exportAllDeclaration } from '@babel/types';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders ProtoLang without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<ProtoLang />, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('ProtoLang', () => {
it('renders the correct subtitle', () => {
const { getByTestId } = render(<ProtoLang />);
expect(getByTestId('ProtoLang')).toHaveTextContent('Proto Language Lexicon');
});
it('renders lexicon from state', () => {
const { getByTestId } = render(<ProtoLang lexicon={[{ lexeme:'one', epoch:{name: 'epoch-one', changes: []} }]}/>);
expect(getByTestId('ProtoLang-Lexicon')).toHaveFormValues({lexicon: 'one'});
});
})

@@ -1,110 +0,0 @@
import React, { useState, useEffect } from 'react';
import './SoundChangeSuite.scss';
const SoundChangeSuite = props => {
const { epochIndex, error, removeEpoch, epochs } = props;
const [ epoch, setEpoch ] = useState(props.epoch ? props.epoch : {name:'', changes:[''], parent:'none'});
const changeHandler = (e,cb) => {
cb(e);
props.updateEpoch(epoch, epochIndex);
}
useEffect(() => {
props.updateEpoch(epoch, epochIndex);
}, [epoch])
const renderOptionFromEpoch = thisEpoch => (
<option
key={`${epoch.name}__parent-option--${thisEpoch.name}`}
value={thisEpoch.name}
>
{thisEpoch.name}
</option>
)
const replaceCurrentEpoch = thisEpoch => {
if (thisEpoch.name === epoch.name) return {name: 'none'}
return thisEpoch;
}
const isViableParent = thisEpoch => {
if (thisEpoch.parent && thisEpoch.parent === epoch.name) return false;
return true;
}
const parentsOptions = () => {
return epochs.map(replaceCurrentEpoch).filter(isViableParent).map(renderOptionFromEpoch)
}
const renderParentInput = () => {
if (epochIndex) return (
<>
<label htmlFor={`${epoch.name}-parent`}>
Parent Epoch:
</label>
<select
name="parent"
list={`${epoch.name}-parents-list`}
value={epoch.parent || 'none'}
onChange={e=>changeHandler(
e, ()=>setEpoch({...epoch, parent:e.target.value})
)
}
>
{parentsOptions()}
</select>
</>
)
return <></>
}
const renderError = () => {
if (error) return (
<p className="error">{error}</p>
)
return <></>
}
return (
<>
<h4>{epoch.name}</h4>
{renderError()}
<form className="SoundChangeSuite__form" data-testid={`${epoch.name}_SoundChangeSuite_changes`}>
<label htmlFor={`${epoch.name}-name`}>
Name:
</label>
<input type="text"
name="epoch"
id={`${epoch.name}-name`} cols="30" rows="1"
value={epoch.name}
onChange={e=>changeHandler(
e, () => {
setEpoch({...epoch, name:e.target.value})
}
)}
></input>
{renderParentInput()}
<textarea
name="changes"
id="" cols="30" rows="10"
value={epoch.changes.join('\n')}
onChange={e=> changeHandler(
e, ()=>setEpoch(
{...epoch, changes:e.target.value.split(/\n/).map(change=>change === ' '
? '[+ feature]>[- feature]/_#'
: change
)}
)
)}
></textarea>
</form>
<form onSubmit={e=>removeEpoch(e, epoch.name)}>
<input className="form form--remove" type="submit" name="remove-epoch" value={`remove ${epoch.name}`}></input>
</form>
</>
);
}
export default SoundChangeSuite;

@@ -1,25 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import SoundChangeSuite from './SoundChangeSuite';
import renderer from 'react-test-renderer';
import { exportAllDeclaration } from '@babel/types';
import {render, fireEvent} from '@testing-library/react';
import extendExpect from '@testing-library/jest-dom/extend-expect'
it('renders SoundChangeSuite without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<SoundChangeSuite epoch={{name:'Epoch Name', changes:['sound change rule']}} updateEpoch={()=>{}} removeEpoch={()=>{}}/>, div);
ReactDOM.unmountComponentAtNode(div);
});
describe('SoundChangeSuite', () => {
it('renders a suite of soundchanges', () => {
const { getByTestId } = render(
<SoundChangeSuite epoch={{name:'Epoch Name', changes:['sound>change/environment']}}
updateEpoch={()=>{}} removeEpoch={()=>{}}
/>
);
expect(getByTestId('Epoch Name_SoundChangeSuite_changes')).toHaveFormValues({changes: 'sound>change/environment'})
})
});

@@ -1,14 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter as Router } from 'react-router-dom';
import './index.scss';
import App from './App';
import * as serviceWorker from './serviceWorker';
ReactDOM.render(<Router><App /></Router>, document.getElementById('root'));
// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();

@@ -1,24 +0,0 @@
@import '../public/stylesheets/variables';
body {
margin: 0;
font-family: 'Catamaran', Arial, Helvetica, sans-serif;
background-color: map-get($colors, 'main--bg');
color: map-get($colors, 'main');
textarea, input[type="text"] {
background-color: map-get($colors, 'text-input--bg');
color: map-get($colors, 'text-input');
border: 1px solid map-get($colors, 'main');
font-family: 'Fira Code', monospace;
}
code {
font-family: 'Fira Code', monospace;
}
p.error {
color: map-get($colors, 'error');
}
}

(file diff suppressed: one or more lines are too long to display)

(deleted binary image: 8 KiB)

@@ -1,3 +0,0 @@
export const clearOutput = (state, action) => {
return { ...state, results: [], errors: {}, parseResults: '' };
}

@@ -1,41 +0,0 @@
// @flow
import type { stateType } from './reducer';
export type epochAction = {
type: "ADD_EPOCH" | "SET_EPOCH" | "REMOVE_EPOCH",
value: {
index?: number,
name: string,
changes?: Array<string>,
parent?: string
}
}
export const addEpoch = (state: stateType, action: epochAction): stateType => {
const newEpoch = { name: action.value.name, changes: action.value.changes || [''], parent: null};
return {...state, epochs: [...state.epochs, newEpoch]}
}
export const setEpoch = (state: stateType, action: epochAction): stateType => {
const index = action.value.index;
if (typeof index !== 'number') return state;
const mutatedEpochs = state.epochs;
mutatedEpochs[index].name = action.value.name
? action.value.name
: mutatedEpochs[index].name;
mutatedEpochs[index].changes = action.value.changes
? action.value.changes
: mutatedEpochs[index].changes;
mutatedEpochs[index].parent = action.value.parent && action.value.parent !== 'none'
? action.value.parent
: null
return {...state, epochs: [...mutatedEpochs]}
}
export const removeEpoch = (state: stateType, action: epochAction): stateType => {
const mutatedEpochs = state.epochs.filter(epoch => epoch.name !== action.value.name )
return {...state, epochs: [...mutatedEpochs]}
}

@@ -1,63 +0,0 @@
import {stateReducer} from './reducer';
describe('Epochs', () => {
const state = {};
beforeEach(()=> {
state.epochs = [
{
name: 'epoch-1',
changes: [''],
parent: null
}
]
})
it('epochs returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
it('epochs addition returns new epochs list', () => {
const action = {type: 'ADD_EPOCH', value: { name: 'epoch-2', changes: [''], parent: null}};
expect(stateReducer(state, action)).toEqual({...state, epochs: [...state.epochs, action.value]})
})
it('epoch-name mutation returns new epochs list with mutation', () => {
const firstAction = {type: 'ADD_EPOCH', value: { name: 'epoch-2', changes: ['']}};
const secondAction = {type: 'SET_EPOCH', value: { index: 0, name: 'proto-lang'}};
const secondState = stateReducer(state, firstAction);
expect(stateReducer(secondState, secondAction)).toEqual(
{...state,
epochs: [
{name: 'proto-lang', changes: [''], parent: null},
{name: 'epoch-2', changes: [''], parent: null}
]
}
);
});
it('epoch changes mutation returns new epochs list with mutation', () => {
const firstAction = {type: 'ADD_EPOCH', value: { name: 'epoch-2', changes: ['']}};
const secondAction = {type: 'SET_EPOCH', value: { index: 0, changes: ['n>t/_#', '[+plosive]>[+nasal -plosive]/_n']}};
const secondState = stateReducer(state, firstAction);
expect(stateReducer(secondState, secondAction)).toEqual(
{...state,
epochs: [
{name: 'epoch-1', changes: ['n>t/_#', '[+plosive]>[+nasal -plosive]/_n'], parent: null},
{name: 'epoch-2', changes: [''], parent: null}
]
}
);
});
it('epochs returned with deleted epoch removed', () => {
const firstAction = {type: 'ADD_EPOCH', value: { name: 'epoch-2', changes: ['']}};
const stateWithTwoEpochs = stateReducer(state, firstAction);
const secondAction = {type: 'REMOVE_EPOCH', value: {index: 0, name: 'epoch-1'}}
expect(stateReducer(stateWithTwoEpochs, secondAction)).toEqual({
...state,
epochs: [{ name: 'epoch-2', changes: [''], parent: null}]
});
});
});

@@ -1,95 +0,0 @@
// @flow
import type { stateType } from './reducer';
export type featureAction = {
type: "ADD_FEATURE",
value: {
positivePhones: Array<string>,
negativePhones: Array<string>,
feature: string
}
}
const addPhones = (phones: {}, phone: string): {} => {
let node = {};
phone.split('').forEach((graph, index) => {
if (index && !node[graph]) node[graph] = {} // don't clobber an existing subtree when a phone is re-added
if (!index && !phones[graph]) phones[graph] = {}
node = index === 0 ? phones[graph] : node[graph];
if (index === phone.length - 1) node.grapheme = phone;
})
return phones;
}
const findPhone = (phones: {}, phone: string): {} => {
return phone
.split('')
.reduce((node, graph, index) => {
node = index === 0 ? phones[graph] : node[graph];
return node;
}, {});
}
const addFeatureToPhone = (
phones: {}, phone: string, featureKey: string, featureValue: boolean
): {} => {
try {
let node = {}
phone.split('').forEach((graph, index) => {
node = index === 0 ? phones[graph] : node[graph];
if (index === phone.split('').length - 1) {
node.features = node && node.features
? {...node.features, [featureKey]: featureValue }
: {[featureKey]: featureValue};
}
});
return phones;
}
catch (e) {
throw { phones, phone, featureKey, featureValue }
}
}
export const addFeature = (state: stateType, action: featureAction): stateType => {
let positivePhones = action.value.positivePhones || [];
let negativePhones = action.value.negativePhones || [];
let newFeatureName = action.value.feature;
let newPhoneObject = [
...positivePhones, ...negativePhones
]
.reduce((phoneObject, phone) => addPhones(phoneObject, phone), state.phones)
if (positivePhones) {
positivePhones.reduce(
(phoneObject, positivePhone) => addFeatureToPhone(phoneObject, positivePhone, newFeatureName, true)
, newPhoneObject
);
positivePhones = positivePhones.map( positivePhone => findPhone(newPhoneObject, positivePhone) )
}
if (negativePhones) {
negativePhones.reduce(
(phoneObject, positivePhone) => addFeatureToPhone(phoneObject, positivePhone, newFeatureName, false)
, newPhoneObject
);
negativePhones = negativePhones.map( negativePhone => findPhone(newPhoneObject, negativePhone) )
}
let newFeature = {[action.value.feature]: {positive: positivePhones, negative: negativePhones}};
return {...state, features:{...state.features, ...newFeature}, phones: newPhoneObject}
}
export const deleteFeature = (state, action) => {
const deletedFeature = state.features[action.value];
deletedFeature.positive.forEach(phone => delete phone.features[action.value])
deletedFeature.negative.forEach(phone => delete phone.features[action.value])
delete state.features[action.value];
return state
}

@@ -1,47 +0,0 @@
import {stateReducer} from './reducer';
describe('Features', () => {
const state = {}
beforeEach(() => {
state.phones = {
a: {features: {occlusive: true}, grapheme: 'a'},
n: {features: {occlusive: false}, grapheme: 'n'}
};
state.features = {
occlusive: {
positive: [state.phones.n],
negative: [state.phones.a]
}
};
});
it('features returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
it('feature addition returns new feature list', () => {
const action = {type: 'ADD_FEATURE', value: {feature: 'anterior'}};
expect(stateReducer(state, action)).toEqual(
{...state,
features:{...state.features,
anterior:{ positive:[], negative:[] }
}
}
);
});
it('feature deletion returns new feature list', () => {
const action = {type: 'DELETE_FEATURE', value: 'occlusive'}
expect(stateReducer(state, action)).toEqual(
{...state,
features: {},
phones: {
a: {features: {}, grapheme: 'a'},
n: {features: {}, grapheme: 'n'}
}
}
)
})
});

@@ -1,739 +0,0 @@
// @flow
import type { stateType } from './reducer';
export type initAction = {
type: "INIT"
}
export const clearState = () => {
return {
epochs: [],
phones: {},
options: { output: 'default', save: false },
results: [],
errors: {},
features: {},
lexicon: [],
latl: '',
parseResults: ''
}
}
export const waffleState = () => {
return {
epochs: [],
phones: {},
options: { output: 'default', save: false },
results: [],
errors: {},
features: {},
lexicon: [],
latl: waffleLatl,
parseResults: ''
}
}
export const initState = (changesArgument: number): stateType => {
const state = {
epochs: [
{
name: 'epoch-1',
changes: [
'[+ occlusive - nasal]>[+ occlusive + nasal]/n_.',
'a>ɯ/._#',
'[+ sonorant - low rounded high back]>0/._.',
'[+ obstruent]>[+ obstruent aspirated ]/#_.',
'[+ sonorant - rounded]>[+ sonorant + rounded]/._#',
// 'at>ta/._#'
]
}
],
phones: {
a: {
grapheme: 'a', features: {
sonorant: true, back: true, low: true, high: false, rounded: false
}
},
u: {
grapheme: 'u', features: {
sonorant: true, back: true, low: false, high: true, rounded: true,
}
},
ɯ: {
grapheme: 'ɯ', features: {
sonorant: true, back: true, low: false, high: true, rounded: false,
}
},
ə: {
grapheme: 'ə', features: {
sonorant: true, low: false, rounded: false, high: false, back: false
}
},
t: {
grapheme: 't', features: {
occlusive: true, coronal: true, obstruent: true, nasal: false
},
ʰ: {
grapheme: 'tʰ', features: {
occlusive: true, coronal: true, obstruent: true, aspirated: true
}
}
},
n: {
grapheme: 'n', features: {
sonorant: true, nasal: true, occlusive: true, coronal: true
}
}
},
options: {
output: 'default', save: false
},
results: [],
errors: {},
features: {},
lexicon: [],
latl: '',
parseResults: ''
};
state.features = {
sonorant: { positive:[ state.phones.a, state.phones.u, state.phones.ɯ, state.phones.ə, state.phones.n], negative: [] },
back: { positive:[ state.phones.a, state.phones.u, state.phones.ɯ ], negative: [ state.phones.ə ] },
low: { positive:[ state.phones.a ], negative: [ state.phones.u, state.phones.ɯ, state.phones.ə ] },
high: { positive:[ state.phones.u, state.phones.ɯ ], negative: [ state.phones.a, state.phones.ə ] },
rounded: { positive:[ state.phones.u ], negative: [ state.phones.a, state.phones.ɯ, state.phones.ə ] },
occlusive: { positive:[ state.phones.t, state.phones.n, state.phones.t.ʰ ], negative: [] },
coronal: { positive:[ state.phones.t, state.phones.n, state.phones.t.ʰ ], negative: [] },
obstruent: { positive:[ state.phones.t, state.phones.n, state.phones.t.ʰ ], negative: [] },
nasal: { positive:[ state.phones.n ], negative: [state.phones.t, state.phones.t.ʰ] },
aspirated: { positive:[ state.phones.t.ʰ ], negative: [ state.phones.t ] },
}
state.lexicon = [
{lexeme: 'anta', epoch: state.epochs[0]},
{lexeme: 'anat', epoch: state.epochs[0]},
{lexeme: 'anət', epoch: state.epochs[0]},
{lexeme: 'anna', epoch: state.epochs[0]},
{lexeme: 'tan', epoch: state.epochs[0]},
{lexeme: 'ənta', epoch: state.epochs[0]}
]
if(changesArgument > -1) state.epochs[0].changes = state.epochs[0].changes.splice(0, changesArgument)
return state;
}
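The `changesArgument` guard above relies on `Array.prototype.splice(0, n)` returning the removed prefix, so reassigning the result truncates the epoch's change list to its first `n` rules. The same idiom in isolation:

```javascript
// splice(0, n) mutates the array and returns the first n elements;
// assigning that return value keeps only the leading rules
let changes = ['rule-1', 'rule-2', 'rule-3', 'rule-4'];
const changesArgument = 2;
if (changesArgument > -1) changes = changes.splice(0, changesArgument);
// changes → ['rule-1', 'rule-2']
```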
const waffleLatl = `
; -------- main class features
[consonantal
+=
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; AFFRICATES
/ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; NASALS
m ɱ / n / ŋ /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; GLIDES
j / w /
; LARYNGEALS
h ɦ / ʔ
]
[sonorant
+=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; GLIDES
j / w w̥ /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; NASALS
m ɱ / n / ŋ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
-=
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; AFFRICATES
/ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; LARYNGEALS
h ɦ / ʔ
]
[approximant
+=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; GLIDES
j / w /
; SYLLABIC LIQUIDS
l̩ / ɹ̩
-=
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; AFFRICATES
/ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; NASALS
m ɱ / n / ŋ /
; SYLLABIC NASALS
m̩ / n̩
]
; -------- laryngeal features
[voice
+=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; GLIDES
j / w /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; NASALS
m ɱ / n / ŋ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩ /
; VOICED FRICATIVES
v / ð / z / ʒ /
; VOICED AFFRICATES
/
; VOICED LARYNGEALS
; LARYNGEALS
ɦ
-= voiceless obstruents
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; VOICELESS AFFRICATES
/ /
; VOICELESS FRICATIVES
f / θ / s / ʃ / ç / x /
; VOICELESS LARYNGEALS
h / ʔ
]
[spreadGlottis
+=
; ASPIRATED PLOSIVES
/ / /
; ASPIRATED AFFRICATES
/
; SPREAD LARYNGEALS
h ɦ
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; UNASPIRATED PLOSIVES
p / pʼ / t / tʼ / ɾ / k / kʼ /
; UNASPIRATED AFFRICATES
/ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; NASAL OBSTRUENTS
m ɱ / n / ŋ /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩ /
; GLIDES
j / w
; CONSTRICTED LARYNGEALS
ʔ
]
[constrictedGlottis
+=
; LARYNGEALIZED RHOTIC
ɹˤ /
; CONSTRICTED LARYNGEAL
ʔ /
; EJECTIVE PLOSIVES
pʼ / tʼ / kʼ
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; PLOSIVES
p / / t / ɾ / k / /
; AFFRICATES
/ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; NASAL OBSTRUENTS
m ɱ / n / ŋ /
; LIQUIDS
l /
; NON-PHARYNGEALIZED RHOTICS
ɹ ɹʲ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; GLIDES
j / w
; SPREAD LARYNGEALS
h ɦ /
]
; -------- manner features
[continuant
+=
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; GLIDES
j / w /
; SYLLABIC LIQUIDS
l̩ / ɹ̩ /
; TAPS
ɾ
-=
; NON-TAP PLOSIVES
p / pʼ / / t / tʼ / / k / kʼ / /
; AFFRICATES
/ /
; NASALS
m ɱ / n / ŋ /
; SYLLABIC NASALS
m̩ / n̩
]
[nasal
+=
; NASALS
m ɱ / n / ŋ /
; SYLLABIC NASALS
m̩ / n̩
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x /
; LIQUIDS + RHOTICS
l / ɹ ɹʲ ɹˤ /
; GLIDES
j / w /
; SYLLABIC LIQUIDS
l̩ / ɹ̩ /
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; AFFRICATES
/ /
]
[strident
+=
; STRIDENT FRICATIVES
f / v / s / z / ʃ / ʒ /
; STRIDENT AFFRICATES
/
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ / /
; NON-STRIDENT FRICATIVES
θ / ð / ç / x /
; NASAL OBSTRUENTS
m ɱ / n / ŋ /
; RHOTICS + LIQUIDS
l / ɹ ɹʲ ɹˤ /
; SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩ /
; GLIDES
j / w
]
[lateral
+=
; LATERAL LIQUIDS
l /
; SYLLABIC LATERALS /
l̩
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; PLOSIVES
p / pʼ / / t / tʼ / ɾ / k / kʼ /
; AFFRICATES
/
; FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ / ç / x
; NASAL OBSTRUENTS
m ɱ / n / ŋ
; RHOTIC LIQUIDS
ɹ ɹʲ ɹˤ
; NON-LIQUID SYLLABIC CONSONANTS
m̩ / n̩ / ɹ̩
; GLIDES
j / w
]
; -------- ---- PLACE features
; -------- labial features
[labial
+=
; ROUNDED VOWELS
u̟ / ʊ̞ / ɔ /
; LABIAL PLOSIVES
p / pʼ / /
; LABIAL FRICATIVES
f / v /
; LABIAL NASALS
m ɱ /
; LABIAL SYLLABIC CONSONANTS
m̩ /
; LABIAL GLIDES
w
-=
; UNROUNDED VOWELS
æ / e / ə / ɑ / ɪ̞ / ɛ / ʌ / i /
; NON-LABIAL PLOSIVES
t / tʼ / ɾ / k / kʼ / /
; NON-LABIAL AFFRICATES
/ /
; NON-LABIAL FRICATIVES
θ / ð / s / z / ʃ / ʒ / ç / x /
; NON-LABIAL NASAL OBSTRUENTS
n / ŋ /
; LIQUIDS
l /
; RHOTIC LIQUIDS
ɹ ɹʲ ɹˤ /
; NON-LABIAL SYLLABIC CONSONANTS
n̩ / l̩ / ɹ̩ /
; NON-LABIAL GLIDES
j
]
; -------- coronal features
[coronal
+=
; CORONAL PLOSIVES
t / tʼ / ɾ /
; CORONAL AFFRICATES
/ /
; CORONAL FRICATIVES
θ / ð / s / z / ʃ / ʒ /
; CORONAL NASALS
n /
; CORONAL LIQUIDS
l
; CORONAL RHOTIC LIQUIDS
ɹ
; CORONAL SYLLABIC CONSONANTS
n̩ / l̩ / ɹ̩
-=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; NON-CORONAL PLOSIVES
p / pʼ / / k / kʼ /
; NON-CORONAL FRICATIVES
f / v / ç / x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ / ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩ /
; NON-CORONAL GLIDES
j / w
]
[anterior
+=
; ALVEOLAR PLOSIVES
t / tʼ / ɾ /
; ALVEOLAR AFFRICATES
/ /
; DENTAL FRICATIVES
θ / ð /
; ALVEOLAR FRICATIVES
s / z /
; ALVEOLAR NASALS
n /
; ALVEOLAR LIQUIDS
l
; ALVEOLAR SYLLABIC CONSONANTS
n̩ / l̩ /
-=
; POSTALVEOLAR FRICATIVES
ʃ / ʒ /
; POSTALVEOLAR RHOTIC LIQUIDS
ɹ /
; POSTALVEOLAR SYLLABIC CONSONANTS
ɹ̩ /
; -- NON-CORONALs
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; NON-CORONAL PLOSIVES
p / pʼ / / k / kʼ /
; NON-CORONAL FRICATIVES
f / v / ç / x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ / ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩ /
; NON-CORONAL GLIDES
j / w
]
[distributed
+=
; DENTAL FRICATIVES
θ / ð /
; POSTALVEOLAR FRICATIVES
ʃ / ʒ /
; POSTALVEOLAR RHOTIC LIQUIDS
ɹ /
; POSTALVEOLAR SYLLABIC CONSONANTS
ɹ̩ /
-=
; apical / retroflex
; ALVEOLAR PLOSIVES
t / tʼ / ɾ /
; ALVEOLAR FRICATIVES
s / z /
; ALVEOLAR NASALS
n /
; ALVEOLAR LIQUIDS
l
; ALVEOLAR SYLLABIC CONSONANTS
n̩ / l̩ /
; -- NON-CORONALS
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; NON-CORONAL PLOSIVES
p / pʼ / / k / kʼ /
; NON-CORONAL FRICATIVES
f / v / ç / x
; NON-CORONAL NASAL OBSTRUENTS
m ɱ / ŋ
; NON-CORONAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; NON-CORONAL SYLLABIC CONSONANTS
m̩ /
; NON-CORONAL GLIDES
j / w
]
; -------- dorsal features
[dorsal
+=
; VOWELS
æ / e / ə / ɑ / ɔ / ɪ̞ / ɛ / ʌ / ʊ̞ / i / u̟ /
; DORSAL PLOSIVES
k / kʼ / /
; DORSAL FRICATIVES
ç / x /
; DORSAL NASAL OBSTRUENTS
ŋ /
; DORSAL RHOTIC LIQUIDS
ɹʲ ɹˤ
; DORSAL GLIDES
j
-=
; NON-DORSAL PLOSIVES
p / pʼ / / t / tʼ / ɾ /
; NON-DORSAL AFFRICATES
/ /
; NON-DORSAL FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ /
; NON-DORSAL NASALS
m ɱ / n /
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; NON-DORSAL GLIDES
w
]
[high
+=
; HIGH VOWELS
i / u̟ / ʊ̞ / ɪ̞
; HIGH DORSAL PLOSIVES
k / kʼ / /
; HIGH DORSAL FRICATIVES
ç / x /
; HIGH DORSAL NASAL OBSTRUENTS
ŋ /
; HIGH RHOTIC LIQUIDS
ɹʲ
; HIGH DORSAL GLIDES
j / w
-= χ / e / o / a
; NON-HIGH VOWELS
ɑ / æ / e / ə / ɛ / ʌ
; NON-HIGH RHOTIC LIQUIDS
ɹˤ
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p / pʼ / / t / tʼ / ɾ /
; NON-DORSAL AFFRICATES
/ /
; NON-DORSAL FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ /
; NON-DORSAL NASALS
m ɱ / n /
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; NON-DORSAL GLIDES
w
]
[low
+=
; LOW VOWELS
ɑ / æ / ɛ /
; LOW DORSAL RHOTIC LIQUIDS
ɹˤ
-= a / ɛ / ɔ
; NON-LOW VOWELS
i / u̟ / ʊ̞ / ɪ̞ / e / ə / ʌ
; NON-LOW DORSAL PLOSIVES
k / kʼ / /
; NON-LOW DORSAL FRICATIVES
ç / x /
; NON-LOW DORSAL NASAL OBSTRUENTS
ŋ /
; NON-LOW DORSAL RHOTIC LIQUIDS
ɹʲ
; DORSAL GLIDES
j
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p / pʼ / / t / tʼ / ɾ /
; NON-DORSAL AFFRICATES
/ /
; NON-DORSAL FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ /
; NON-DORSAL NASALS
m ɱ / n /
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; NON-DORSAL GLIDES
w
]
[back
+=
; BACK VOWELS
ɑ / ɔ / ʌ / ʊ̞ / u̟ /
; BACK DORSAL PLOSIVES
k / kʼ / /
; BACK DORSAL FRICATIVES
x /
; BACK DORSAL NASAL OBSTRUENTS
ŋ /
; BACK DORSAL RHOTIC LIQUIDS
ɹˤ
-=
; NON-BACK DORSAL FRICATIVES
ç /
; NON-BACK DORSAL RHOTIC LIQUIDS
ɹʲ
; NON-BACK DORSAL GLIDES
j
; NON-BACK VOWELS
æ / e / ə / ɪ̞ / ɛ / i
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p / pʼ / / t / tʼ / ɾ /
; NON-DORSAL AFFRICATES
/ /
; NON-DORSAL FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ /
; NON-DORSAL NASALS
m ɱ / n /
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; NON-DORSAL GLIDES
w
]
[tense ; compare to ATR or RTR
+=
; TENSE VOWELS
e / i / u̟ / ɑ
-=
; NON-TENSE VOWELS
æ / ə / ɪ̞ / ɛ / ʌ / ʊ̞ / ɔ /
; DORSAL PLOSIVES
k / kʼ / /
; DORSAL FRICATIVES
ç / x /
; DORSAL NASAL OBSTRUENTS
ŋ /
; DORSAL RHOTIC LIQUIDS
ɹʲ ɹˤ /
; DORSAL GLIDES
j
; -- NON-DORSALS
; NON-DORSAL PLOSIVES
p / pʼ / / t / tʼ / ɾ /
; NON-DORSAL AFFRICATES
/ /
; NON-DORSAL FRICATIVES
f / v / θ / ð / s / z / ʃ / ʒ /
; NON-DORSAL NASALS
m ɱ / n /
; NON-DORSAL LIQUIDS
l
; NON-DORSAL RHOTIC LIQUIDS
ɹ
; NON-DORSAL SYLLABIC CONSONANTS
m̩ / n̩ / l̩ / ɹ̩
; NON-DORSAL GLIDES
w
]
*PROTO
; -- Devoicing, all our z's become s's
[+ voice - continuant]>[- voice]/._.
; -- Reduction of schwa
ə>0/._.
|Gif Lang
*PROTO
; -- Ejectivization, all our pits become pit's
[+ spreadGlottis - continuant]>[+ constrictedGlottis - spreadGlottis]/._[+ constrictedGlottis]
[+ spreadGlottis - continuant]>[+ constrictedGlottis - spreadGlottis]/[+ constrictedGlottis]_.
[+ constrictedGlottis]>0/[+ constrictedGlottis - continuant]_.
[+ constrictedGlottis]>0/._[+ constrictedGlottis - continuant]
|Jif Lang
`


@ -1,74 +0,0 @@
// @flow
import { addLexeme, setLexicon } from './reducer.lexicon';
import type { lexiconAction } from './reducer.lexicon';
import { addEpoch, setEpoch, removeEpoch } from './reducer.epochs';
import type { epochAction } from './reducer.epochs';
import { addFeature, deleteFeature } from './reducer.features';
import type { featureAction } from './reducer.features';
import type { optionsAction } from './reducer.options';
import { setOptions } from './reducer.options';
import { run } from './reducer.results';
import type { resultsAction } from './reducer.results'
import { initState } from './reducer.init';
import type { initAction } from './reducer.init';
import { clearOutput } from './reducer.clear';
import { setLatl, parseLatl } from './reducer.latl';
export type stateType = {
lexicon: Array<{lexeme: string, epoch: epochType}>,
epochs: Array<epochType>,
phones: {[key: string]: phoneType},
options: {output: string, save: boolean},
results: [],
errors: {},
features: featureType
}
type epochType = {
name: string, changes: Array<string>
}
type phoneType = {
grapheme: string,
features: {[key: string]: boolean}
}
type featureType = {
[key: string]: {[key: string]: Array<phoneType>}
}
type actionType = featureAction | epochAction | initAction | resultsAction | lexiconAction
export const stateReducer = (state: stateType, action: actionType): stateType => {
switch (action.type) {
case 'INIT': {
return initState();
}
case 'ADD_LEXEME': return addLexeme(state, action);
case 'SET_LEXICON': return setLexicon(state, action);
case 'ADD_EPOCH': return addEpoch(state, action);
case 'SET_EPOCH': return setEpoch(state, action);
case 'REMOVE_EPOCH': return removeEpoch(state, action);
case 'ADD_FEATURE': return addFeature(state, action);
case 'DELETE_FEATURE': return deleteFeature(state, action);
case 'SET_OPTIONS': return setOptions(state, action);
case 'SET_LATL': return setLatl(state, action);
case 'PARSE_LATL': return parseLatl(state, action);
case 'CLEAR': return clearOutput(state, action);
case 'RUN': return run(state, action);
default: return state;
}
}
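Each case delegates to a handler from its own module, and unrecognized actions fall through to the unchanged state. A minimal, self-contained sketch of the same dispatch pattern with one inlined handler (the handler body mirrors the `ADD_FEATURE` behavior exercised in the tests):

```javascript
// one handler inlined so the dispatch pattern runs standalone
const addFeature = (state, action) => ({
  ...state,
  features: {
    ...state.features,
    [action.value.feature]: { positive: [], negative: [] },
  },
});

const stateReducer = (state, action) => {
  switch (action.type) {
    case 'ADD_FEATURE': return addFeature(state, action);
    default: return state; // unknown actions leave state untouched
  }
};

const state = { features: {} };
const withFeature = stateReducer(state, { type: 'ADD_FEATURE', value: { feature: 'anterior' } });
// withFeature.features.anterior → { positive: [], negative: [] }
```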


@ -1,529 +0,0 @@
import { stateReducer } from './reducer';
export const setLatl = (state, action) => {
let latl = action.value;
return {...state, latl, parseResults: ''};
}
const getOneToken = (latl, tokens) => {
  for (const [type, regEx] of tokenTypes) {
    const newRegEx = new RegExp(`^(${regEx})`);
    const match = latl.match(newRegEx) || null;
    if (match) {
      const newTokens = [...tokens, {type, value: match[0].trim()}]
      const newLatl = latl.slice(match[0].length);
      return [newLatl, newTokens]
    }
  }
  throw new Error(`Unexpected token at ${latl.split('\n')[0]}`)
}
export const tokenize = latl => {
  let tokens = [];
  let newLatl = latl.trim();
  try {
    while(newLatl.length) {
      [newLatl, tokens] = getOneToken(newLatl, tokens)
    }
    return tokens;
  }
  catch (err) {
    return {errors: 'tokenization error', message: err.message, newLatl}
  }
}
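The tokenizer repeatedly anchors each pattern at the head of the remaining input and consumes the first match. A cut-down, runnable sketch with a three-entry token table (illustrative entries, not the full `tokenTypes` list defined at the bottom of this module):

```javascript
// miniature token table; the real module's table is larger and order-sensitive
const tokenTypes = [
  ['star', '\\*'],
  ['referent', '[A-Za-z]+'],
  ['whiteSpace', '\\s+'],
];

const getOneToken = (latl, tokens) => {
  for (const [type, regEx] of tokenTypes) {
    const match = latl.match(new RegExp(`^(${regEx})`));
    if (match) {
      return [latl.slice(match[0].length), [...tokens, { type, value: match[0].trim() }]];
    }
  }
  throw new Error(`Unexpected token at ${latl.split('\n')[0]}`);
};

const tokenize = latl => {
  let tokens = [];
  let newLatl = latl.trim();
  while (newLatl.length) {
    [newLatl, tokens] = getOneToken(newLatl, tokens);
  }
  return tokens;
};

// tokenize('*PROTO') → [{ type: 'star', value: '*' }, { type: 'referent', value: 'PROTO' }]
```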
const parseLineBreak = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
if (!lastNode) return tree;
switch (lastNode.type) {
case 'rule': {
  if (tree[tree.length - 2].type === 'ruleSet') {
    const ruleValue = lastNode.value;
    tree[tree.length - 2].value.push(ruleValue);
    tree.pop()
    return tree;
  }
  if (tree[tree.length - 2].type === 'epoch') {
    const newNode = { type: 'ruleSet', value: [ lastNode.value ] }
    tree[tree.length - 1] = newNode;
    return tree;
  }
  return tree; // avoid falling through to the feature cases
}
case 'feature--plus': {
// tree[tree.length - 1].type === 'feature';
return tree;
}
case 'feature--minus': {
// tree[tree.length - 1].type === 'feature';
return tree;
}
default:
return tree;
}
}
const parseWhiteSpace = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule': {
tree[tree.length - 1] = {...lastNode, value: lastNode.value + ' ' }
return tree;
}
default:
return tree;
}
}
const parseStar = (tree, token, index, tokens) => {
  const nextToken = tokens[index + 1];
  if (nextToken && nextToken.type === 'referent') {
    return [...tree, { type: 'epoch-parent' }]
  }
  return [...tree, 'unexpected star']
}
const parsePipe = (tree, token, index, tokens) => {
const nextToken = tokens[index + 1];
if (nextToken && nextToken.type === 'referent') {
const ruleToken = tree[tree.length - 1];
const epochToken = tree[tree.length - 2];
if (ruleToken.type === 'rule' || ruleToken.type === 'ruleSet') {
if (epochToken.type === 'epoch') {
tree[tree.length - 2] = {
...epochToken,
changes: [...ruleToken.value],
type: 'epoch-name'
}
tree.pop();
return tree;
}
}
}
return [...tree, 'unexpected pipe']
}
const parseReferent = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'epoch-parent': {
tree[tree.length - 1] = {...lastNode, parent: token.value, type: 'epoch' }
return tree;
}
case 'epoch-name': {
tree[tree.length - 1] = {...lastNode, name: token.value, type: 'epoch' }
return [...tree, { type: 'main'}];
}
case 'epoch': {
return [...tree, { type: 'rule', value: token.value } ]
}
case 'rule': {
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value }
return tree;
}
case 'ruleSet': {
return [...tree, { type: 'rule', value: token.value }]
}
case 'feature': {
  if (!lastNode.value) {
    tree[tree.length - 1].value = token.value;
    return tree;
  }
  // falls through to 'feature--plus' when the feature already has a name
}
case 'feature--plus': {
if (lastNode.value) {
lastNode.positivePhones = [...lastNode.positivePhones, token.value ]
}
else {
lastNode.value = token.value;
}
tree[tree.length - 1] = lastNode;
return [...tree]
}
case 'feature--minus': {
if (lastNode.value) {
lastNode.negativePhones = [...lastNode.negativePhones, token.value ]
}
else {
lastNode.value = token.value;
}
tree[tree.length - 1] = lastNode;
return [...tree]
}
case 'lexicon': {
if (!lastNode.epoch) {
tree[tree.length - 1].epoch = token.value;
}
else {
tree[tree.length - 1].value.push(token.value)
}
return tree;
}
default:
return [...tree, `unexpected referent ${token.value}`]
}
}
const parsePhone = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch(lastNode.type) {
case 'rule': {
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value }
return tree;
}
case 'ruleSet': {
return [...tree, { type: 'rule', value: token.value }]
}
case 'feature--plus':
lastNode.positivePhones = [...lastNode.positivePhones, token.value ];
tree[tree.length - 1] = lastNode;
return tree;
case 'feature--minus':
lastNode.negativePhones = [...lastNode.negativePhones, token.value ];
tree[tree.length - 1] = lastNode;
return tree;
default:
return [...tree, `unexpected phone ${token.value}`]
}
}
const parseOpenBracket = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
if (lastNode) {
switch (lastNode.type) {
case 'epoch':
return [...tree, {type: 'rule', value: token.value}]
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value }
return tree;
case 'ruleSet':
return [...tree, {type: 'rule', value: token.value}];
// case 'feature':
// return [{type: 'feature', positivePhones: [], negativePhones: []}];
case 'feature--plus':
return [...tree, {type: 'feature', positivePhones: [], negativePhones: []}];
case 'feature--minus':
return [...tree, {type: 'feature', positivePhones: [], negativePhones: []}];
case 'main':
return [...tree, {type: 'feature', positivePhones: [], negativePhones: []}];
default:
return [...tree, 'unexpected open bracket']
}
}
return [{type: 'feature', positivePhones: [], negativePhones: []}]
}
const parseCloseBracket = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value }
return tree;
case 'feature--plus':
return tree;
case 'feature--minus':
return tree;
default:
return [...tree, 'unexpected close bracket']
}
}
const parsePositiveAssignment = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'feature':
tree[tree.length - 1].type = 'feature--plus'
return tree;
default:
return [...tree, 'unexpected positive assignment']
}
}
const parseNegativeAssignment = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'feature':
tree[tree.length - 1].type = 'feature--minus'
return tree;
case 'feature--plus':
tree[tree.length - 1].type = 'feature--minus';
return tree;
default:
return [...tree, 'unexpected negative assignment']
}
}
const parsePlus = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
case 'feature':
tree[tree.length - 1] = {...lastNode, type: 'feature--plus'}
return tree;
case 'feature--minus':
tree[tree.length - 1] = {...lastNode, type: 'feature--minus'}
return tree;
default:
return [...tree, 'unexpected plus']
}
}
const parseMinus = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
case 'feature':
tree[tree.length - 1] = {...lastNode, type: 'feature--minus'}
return tree;
default:
return [...tree, 'unexpected minus']
}
}
const parseEqual = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'feature--plus':
return tree;
case 'feature--minus':
return tree;
default:
return [...tree, 'unexpected equal'];
}
}
const parseGreaterThan = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
default:
return [...tree, 'unexpected greater than']
}
}
const parseSlash = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
if (lastNode) {
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
case 'feature--plus':
return tree;
case 'feature--minus':
return tree;
case 'lexicon':
return [...tree, { }];
case 'main':
return [...tree, { type: 'lexicon', value: []}]
default:
return [...tree, 'unexpected slash']
}
}
return [...tree, { type: 'lexicon', value: []}]
}
const parseHash = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
default:
return [...tree, 'unexpected hash']
}
}
const parseDot = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
default:
return [...tree, 'unexpected dot']
}
}
const parseUnderScore = (tree, token, index, tokens) => {
const lastNode = tree[tree.length - 1];
switch (lastNode.type) {
case 'rule':
tree[tree.length - 1] = {...lastNode, value: lastNode.value + token.value}
return tree;
default:
return [...tree, 'unexpected underscore']
}
}
const generateNode = (tree, token, index, tokens) => {
switch (token.type) {
// if comment, consume without effect
case 'semicolon':
return [...tree]
case 'lineBreak':
return parseLineBreak(tree, token, index, tokens);
case 'whiteSpace':
return parseWhiteSpace(tree, token, index, tokens);
// if *PROTO consume token:* and add epochs: [ { parent: 'PROTO' } ]
case 'star':
return parseStar(tree, token, index, tokens);
case 'pipe':
return parsePipe(tree, token, index, tokens);
case 'referent':
return parseReferent(tree, token, index, tokens);
case 'phone':
return parsePhone(tree, token, index, tokens);
case 'openBracket':
return parseOpenBracket(tree, token, index, tokens);
case 'closeBracket':
return parseCloseBracket(tree, token, index, tokens);
case 'positiveAssignment':
return parsePositiveAssignment(tree, token, index, tokens);
case 'negativeAssignment':
return parseNegativeAssignment(tree, token, index, tokens);
case 'plus':
return parsePlus(tree, token, index, tokens);
case 'minus':
return parseMinus(tree, token, index, tokens);
case 'equal':
return parseEqual(tree, token, index, tokens);
case 'greaterThan':
return parseGreaterThan(tree, token, index, tokens);
case 'slash':
return parseSlash(tree, token, index, tokens);
case 'hash':
return parseHash(tree, token, index, tokens);
case 'dot':
return parseDot(tree, token, index, tokens);
case 'underscore':
return parseUnderScore(tree, token, index, tokens);
default:
return [...tree, { ...token }]
}
}
const addToken = (tree, token, index, tokens) => generateNode(tree, token, index, tokens);
const connectNodes = (tree, node, index, nodes) => {
switch (node.type) {
case 'epoch':
delete node.type;
return {...tree, epochs: [...tree.epochs, {...node, index: tree.epochs.length } ] }
case 'feature':
node.feature = node.value;
delete node.value;
delete node.type;
return {...tree, features: [...tree.features, {...node } ] }
case 'feature--minus':
node.feature = node.value;
delete node.value;
delete node.type;
if (tree.features.length && tree.features[tree.features.length - 1].feature === node.feature) {
tree.features[tree.features.length - 1].negativePhones = node.negativePhones
return tree;
}
return {...tree, features: [...tree.features, {...node} ] }
case 'feature--plus':
delete node.type;
node.feature = node.value;
delete node.value;
if (tree.features.length && tree.features[tree.features.length - 1].feature === node.feature) {
tree.features[tree.features.length - 1].positivePhones = node.positivePhones
return tree;
}
return {...tree, features: [...tree.features, {...node} ] }
case 'lexicon':
delete node.type;
return {...tree, lexicon: [...tree.lexicon, node]}
default:
return tree;
}
}
export const buildTree = tokens => {
const bareTree = {
epochs: [],
features: [],
lexicon: []
}
const nodes = tokens.reduce(addToken, []);
// return nodes
const tree = nodes.reduce(connectNodes, bareTree);
const filterProps = Object.entries(tree).filter(([key, value]) => !value.length)
.map(([key, value]) => key)
return filterProps.reduce((tree, badProp) => {
delete tree[badProp];
return tree;
}, tree);
}
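The final filter in `buildTree` prunes any top-level array that stayed empty, so source text containing only epoch definitions yields a tree with just an `epochs` key. The pruning step in isolation:

```javascript
// drop top-level keys whose array values are empty, mutating the tree in place
const prune = tree => {
  const emptyKeys = Object.entries(tree)
    .filter(([, value]) => !value.length)
    .map(([key]) => key);
  return emptyKeys.reduce((acc, key) => {
    delete acc[key];
    return acc;
  }, tree);
};

// prune({ epochs: [{ name: 'CHILD' }], features: [], lexicon: [] })
// → { epochs: [{ name: 'CHILD' }] }
```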
export const generateAST = latl => {
// tokenize
const tokens = tokenize(latl.trim());
// build tree
const tree = buildTree(tokens);
return tree;
}
export const parseLatl = (state, action) => {
try {
const latl = state.latl;
const AST = generateAST(latl);
const features = AST.features;
if (features) {
if (state.features) {
state = Object.keys(state.features).reduce((state, feature) => {
return stateReducer(state, {type: 'DELETE_FEATURE', value: feature})
}, state)
}
state = features.reduce((state, feature) => stateReducer(state, {type:'ADD_FEATURE', value: feature}), state);
}
delete AST.features;
const lexicon = AST.lexicon;
if (lexicon) {
if (state.lexicon) {
state.lexicon = [];
}
state = lexicon.reduce((state, epoch) => {
return epoch.value.reduce((reducedState, lexeme) => {
return stateReducer(reducedState, {type: 'ADD_LEXEME', value: { lexeme, epoch: epoch.epoch }})
}, state)
}, state)
}
delete AST.lexicon;
Object.entries(AST).forEach(([key, value]) => state[key] = value);
return { ...state, parseResults: 'latl parsed successfully', results:[] }
}
catch (e) {
console.log(e)
return { ...state, parseResults: 'error parsing', errors: e}
}
}
const tokenTypes = [
['semicolon', ';.*\n'],
[`star`, `\\*`],
['pipe', `\\|`],
['openBracket', `\\[`],
['closeBracket', `\\]`],
['positiveAssignment', `\\+=`],
['negativeAssignment', `\\-=`],
['plus', `\\+`],
['minus', `\\-`],
['greaterThan', `\\>`],
['hash', `#`],
['slash', `\/`],
['dot', `\\.`],
['underscore', `\\_`],
[`referent`, `[A-Za-z]+[\u00c0-\u03FFA-Za-z0-9\\-\\_]*`],
[`phone`, `[\u00c0-\u03FFA-Za-z0]+`],
['equal', `=`],
[`lineBreak`, `\\n`],
[`whiteSpace`, `\\s+`]
]
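Order in this table is significant: `getOneToken` takes the first pattern that matches, so comments (`semicolon`) must be consumed before the punctuation they contain, and `referent` must be tried before the broader `phone` class. A small first-match check illustrating the precedence:

```javascript
// first-match semantics over an ordered subset of the table above
const orderedTypes = [
  ['semicolon', ';.*\\n'],
  ['referent', '[A-Za-z]+[A-Za-z0-9\\-\\_]*'],
];

const classify = input => {
  for (const [type, regEx] of orderedTypes) {
    if (input.match(new RegExp(`^(${regEx})`))) return type;
  }
  return null;
};

// classify('; a comment\n') → 'semicolon' (the whole comment line is swallowed)
// classify('PROTO') → 'referent'
```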


@ -1,520 +0,0 @@
import { stateReducer } from './reducer';
import { initState } from './reducer.init';
import { tokenize, buildTree, parseLatl } from './reducer.latl';
describe('LATL', () => {
it('returns state unaltered with no action body', () => {
const state = initState();
const action = {
type: 'SET_LATL',
value: ''
}
const returnedState = stateReducer(state, action)
expect(returnedState).toStrictEqual(state);
})
it('returns tokens from well-formed latl epoch definition', () => {
const tokens = tokenize(epochDefinitionLatl);
expect(tokens).toStrictEqual(tokenizedEpoch)
});
it('returns tokens from well-formed latl feature definition', () => {
const tokens = tokenize(featureDefinitionLatl);
expect(tokens).toStrictEqual(tokenizedFeature);
});
it('returns tokens from well-formed latl lexicon definition', () => {
const tokens = tokenize(lexiconDefinitionLatl);
expect(tokens).toStrictEqual(tokenizedLexicon);
});
it('returns tokens from well-formed latl epoch, feature, and lexicon definitions', () => {
const latl = epochDefinitionLatl + '\n' + featureDefinitionLatl + '\n' + lexiconDefinitionLatl;
const tokens = tokenize(latl);
const lineBreaks = [{ type: 'lineBreak', value: '' },{ type: 'lineBreak', value: '' },{ type: 'lineBreak', value: '' }]
const tokenizedLatl = [...tokenizedEpoch, ...lineBreaks, ...tokenizedFeature, ...lineBreaks, ...tokenizedLexicon];
expect(tokens).toStrictEqual(tokenizedLatl);
});
it('returns AST from well-formed epoch tokens', () => {
const tree = buildTree(tokenizedEpoch);
expect(tree).toStrictEqual(treeEpoch);
})
it('returns AST from well-formed feature tokens', () => {
const tree = buildTree(tokenizedFeature);
expect(tree).toStrictEqual(treeFeature);
})
it('returns AST from well-formed lexicon tokens', () => {
const tree = buildTree(tokenizedLexicon);
expect(tree).toStrictEqual(treeLexicon);
})
it('parse returns state from well-formed feature latl', () => {
const state = initState();
const setAction = {
type: 'SET_LATL',
value: featureDefinitionLatl
}
const latlState = stateReducer(state, setAction);
const parseState = parseLatl(latlState, {});
expect(parseState).toStrictEqual(featureState)
})
it('returns run from well-formed epoch latl', () => {
const state = initState();
const setAction = {
type: 'SET_LATL',
value: runEpochLatl
}
const latlState = stateReducer(state, setAction);
const parseState = parseLatl(latlState, {})
// expect(parseState).toStrictEqual(epochState);
parseState.lexicon[0].epoch = 'PROTO'
const runState = stateReducer(parseState, {type: 'RUN', value:{}})
expect(runState).toStrictEqual({...runState, results: runEpochResults})
})
it('returns state from well-formed lexicon latl', () => {
const state = initState();
const setAction = {
type: 'SET_LATL',
value: lexiconDefinitionLatl
}
const latlState = stateReducer(state, setAction);
const parseState = parseLatl(latlState, {});
expect(parseState).toStrictEqual(lexiconState)
})
// it('returns state from well formed latl', () => {
// const state = initState();
// const setAction = {
// type: 'SET_LATL',
// value: totalLatl
// }
// const latlState = stateReducer(state, setAction);
// const parseState = parseLatl(latlState, {});
// expect(parseState).toStrictEqual(totalLatlState)
// })
})
const epochDefinitionLatl = `
; comment
*PROTO
[+ FEATURE]>[- FEATURE]/._.
n>m/#_.
|CHILD
`
const runEpochLatl = `
; comment
*PROTO
a>u/._.
|epoch-1
`
const runEpochResults = [
{
pass: 'epoch-1',
parent: 'PROTO',
lexicon: [ 'untu', 'unut', 'unət', 'unnu', 'tun', 'əntu' ]
}
]
const tokenizedEpoch = [
{ type: "semicolon", value: "; comment" },
{ type: "star", value: "*" }, { type: "referent", value: "PROTO" }, { type: 'lineBreak', value: '' }, { type: "whiteSpace", value: "" },
{ type: "openBracket", value: "[" }, { type: "plus", value: "+" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "FEATURE" }, { type: "closeBracket", value: "]" },
{ type: "greaterThan", value: ">" }, { type: "openBracket", value: "[" }, { type: "minus", value: "-" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "FEATURE" }, { type: "closeBracket", value: "]" },
{ type: "slash", value: "/" }, { type: "dot", value: "." },
{ type: "underscore", value: "_" }, { type: "dot", value: "." }, { type: 'lineBreak', value: '' }, { type: "whiteSpace", value: "" },
{ type: "referent", value: "n" },
{ type: "greaterThan", value: ">" }, { type: "referent", value: "m" },
{ type: "slash", value: "/" }, { type: "hash", value: "#" },
{ type: "underscore", value: "_" }, { type: "dot", value: "." }, { type: 'lineBreak', value: '' },
{ type: "pipe", value: "|" }, { type: "referent", value: "CHILD" }
]
const treeEpoch = {
epochs: [
{
parent: 'PROTO',
name: 'CHILD',
index: 0,
changes: [
'[+ FEATURE]>[- FEATURE]/._.',
'n>m/#_.'
]
}
]
}
const epochState = {
...initState(),
epochs: treeEpoch.epochs,
latl: epochDefinitionLatl
}
const featureDefinitionLatl = `
[+ PLOSIVE] = kp/p/b/d/t/g/k
[- PLOSIVE] = m/n/s/z
[SONORANT
+= m/n
-= s/z/kp/p/b/d/t/g/k
]
`
const tokenizedFeature = [
{type: "openBracket", value: "[" }, { type: "plus", value: "+" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "PLOSIVE" }, { type: "closeBracket", value: "]" }, { type: "whiteSpace", value: "" },
{ type: "equal", value: "=" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "kp" }, { type: "slash", value: "/" }, { type: "referent", value: "p" }, { type: "slash", value: "/" }, { type: "referent", value: "b" }, { type: "slash", value: "/" }, { type: "referent", value: "d" }, { type: "slash", value: "/" }, { type: "referent", value: "t" }, { type: "slash", value: "/" }, { type: "referent", value: "g" }, { type: "slash", value: "/" }, { type: "referent", value: "k" }, { type: 'lineBreak', value: '' },
{type: "openBracket", value: "[" }, { type: "minus", value: "-" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "PLOSIVE" }, { type: "closeBracket", value: "]" }, { type: "whiteSpace", value: "" },
{ type: "equal", value: "=" }, { type: "whiteSpace", value: "" }, { type: "referent", value: "m" }, { type: "slash", value: "/" }, { type: "referent", value: "n" }, { type: "slash", value: "/" }, { type: "referent", value: "s" }, { type: "slash", value: "/" }, { type: "referent", value: "z" }, { type: 'lineBreak', value: '' },
{type: "openBracket", value: "[" }, { type: "referent", value: "SONORANT" }, { type: 'lineBreak', value: '' },
{ type: "whiteSpace", value: "" }, { type: "positiveAssignment", value: "+=" }, { type: "whiteSpace", value: "" },
{ type: "referent", value: "m" }, { type: "slash", value: "/" }, { type: "referent", value: "n" }, { type: 'lineBreak', value: '' },
{ type: "whiteSpace", value: "" }, { type: "negativeAssignment", value: "-=" }, { type: "whiteSpace", value: "" },
{ type: "referent", value: "s" }, { type: "slash", value: "/" }, { type: "referent", value: "z" }, { type: "slash", value: "/" }, { type: "referent", value: "kp" }, { type: "slash", value: "/" }, { type: "referent", value: "p" }, { type: "slash", value: "/" }, { type: "referent", value: "b" }, { type: "slash", value: "/" }, { type: "referent", value: "d" }, { type: "slash", value: "/" }, { type: "referent", value: "t" }, { type: "slash", value: "/" }, { type: "referent", value: "g" }, { type: "slash", value: "/" }, { type: "referent", value: "k" }, { type: 'lineBreak', value: '' },
{ type: "closeBracket", value: "]" },
]
const treeFeature = { features: [
{
feature: 'PLOSIVE',
positivePhones: ['kp', 'p', 'b', 'd', 't', 'g', 'k'],
negativePhones: ['m', 'n', 's', 'z']
},
{
feature: 'SONORANT',
positivePhones: ['m', 'n'],
negativePhones: ['s' ,'z' ,'kp' ,'p' ,'b' ,'d' ,'t' ,'g' ,'k']
}
]}
const featureState = {
...initState(),
features: {
PLOSIVE: {
negative: [
{
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "m",
},
{
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "n",
},
{
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "s",
},
{
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "z",
},
],
positive: [
{
features: {
PLOSIVE: true,
},
grapheme: "kp",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "p",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "b",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "d",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "t",
ʰ: {
features: {},
grapheme: "tʰ",
},
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "g",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "k",
p: {
features: {
SONORANT: false,
},
grapheme: "kp",
},
},
],
},
SONORANT: {
negative: [
{
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "s",
},
{
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "z",
},
{
features: {
SONORANT: false,
},
grapheme: "kp",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "p",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "b",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "d",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "t",
ʰ: {
features: {},
grapheme: "tʰ",
},
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "g",
},
{
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "k",
p: {
features: {
SONORANT: false,
},
grapheme: "kp",
},
},
],
positive: [
{
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "m",
},
{
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "n",
},
],
    },
  },
parseResults: 'latl parsed successfully',
latl: featureDefinitionLatl,
phones: {
a: {
features: {},
grapheme: "a",
},
b: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "b",
},
d: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "d",
},
g: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "g",
},
k: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "k",
p: {
features: {
SONORANT: false,
},
grapheme: "kp",
},
},
m: {
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "m",
},
n: {
features: {
PLOSIVE: false,
SONORANT: true,
},
grapheme: "n",
},
p: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "p",
},
s: {
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "s",
},
t: {
features: {
PLOSIVE: true,
SONORANT: false,
},
grapheme: "t",
ʰ: {
features: {},
grapheme: "tʰ",
},
},
u: {
features: {},
grapheme: "u",
},
z: {
features: {
PLOSIVE: false,
SONORANT: false,
},
grapheme: "z",
},
ə: {
features: {},
grapheme: "ə",
},
ɯ: {
features: {},
grapheme: "ɯ",
},
}
}
const lexiconDefinitionLatl = `
/PROTO
kpn
sm
/
`
const tokenizedLexicon = [
{ type: "slash", value: "/" }, { type: "referent", value: "PROTO" }, { type: 'lineBreak', value: '' },
{ type: "whiteSpace", value:"" }, { type: "referent", value: "kpn" }, { type: 'lineBreak', value: '' },
{ type: "whiteSpace", value:"" }, { type: "referent", value: "sm" }, { type: 'lineBreak', value: '' },
{ type: "slash", value: "/" }
]
const treeLexicon = {lexicon: [{epoch: "PROTO", value: ["kpn", "sm"]}]};
const lexiconState = {
...initState(),
latl: lexiconDefinitionLatl,
lexicon: [
{ lexeme: 'kpn', epoch: 'PROTO'},
{ lexeme: 'sm', epoch: 'PROTO'}
],
parseResults: 'latl parsed successfully'
}
const totalLatl = `${epochDefinitionLatl}\n\n${featureDefinitionLatl}\n\n${lexiconDefinitionLatl}`
const totalLatlState = {
...initState(),
latl: totalLatl,
phonemes: {},
features: featureState.features,
epochs: treeEpoch.epochs,
lexicon: lexiconState.lexicon,
parseResults: 'latl parsed successfully'
}


@ -1,43 +0,0 @@
// @flow
import type { stateType } from './reducer';
type lexemeType = {
lexeme: string,
epoch?: string
}
type addLexemeAction = {
type: 'ADD_LEXEME',
value: lexemeType
}
type setLexiconAction = {
type: 'SET_LEXICON',
value: Array<lexemeType>
}
const makeLexeme = (lexeme: string, epochName: ?string, state: stateType) => {
const newLexeme = {lexeme: lexeme, epoch: state.epochs[0]};
if (epochName) {
const epochIndex = state.epochs.findIndex(epoch => epoch.name === epochName);
if (epochIndex > -1) {
newLexeme.epoch = state.epochs[epochIndex];
} else {
newLexeme.epoch = epochName;
    }
}
return newLexeme;
}
export type lexiconAction = addLexemeAction | setLexiconAction
export const addLexeme = (state: stateType, action: addLexemeAction): stateType => {
const newLexeme = makeLexeme(action.value.lexeme, action.value.epoch, state);
return {...state, lexicon:[...state.lexicon, newLexeme]}
}
export const setLexicon = (state: stateType, action: setLexiconAction): stateType => {
let newLexicon = action.value;
newLexicon = newLexicon.map(lexeme => makeLexeme(lexeme.lexeme, lexeme.epoch, state));
return {...state, lexicon: newLexicon}
}
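The epoch-lookup fallback in `makeLexeme` is easy to miss: a lexeme stores the resolved epoch object when the name matches one in state, but keeps the raw name string when it does not. A minimal standalone sketch of that behavior (reimplemented here with `find` against an assumed state shape, not the reducer's actual export):

```javascript
// Standalone sketch of makeLexeme's epoch fallback (assumed state shape).
const makeLexeme = (lexeme, epochName, state) => {
  const newLexeme = { lexeme, epoch: state.epochs[0] };
  if (epochName) {
    const found = state.epochs.find(epoch => epoch.name === epochName);
    // Known epoch name: store the epoch object; unknown: keep the raw string.
    newLexeme.epoch = found !== undefined ? found : epochName;
  }
  return newLexeme;
};

const state = { epochs: [{ name: 'epoch-1' }, { name: 'epoch-2' }] };
console.log(makeLexeme('anta', 'epoch-2', state).epoch.name); // epoch-2
console.log(makeLexeme('anta', 'missing', state).epoch);      // missing
```

The tests below depend on exactly this split: known names resolve to epoch objects, unknown names pass through as strings.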


@ -1,67 +0,0 @@
import {stateReducer} from './reducer';
describe('Lexicon', () => {
const state = {
epochs: [
{ name: 'epoch-1', changes:[''] },
{ name: 'epoch-2', changes:[''] }
]
}
state.lexicon = [
{lexeme:'anta', epoch:state.epochs[0]},
{lexeme:'anat', epoch:state.epochs[0]},
{lexeme:'anət', epoch:state.epochs[0]},
{lexeme:'anna', epoch:state.epochs[0]},
{lexeme:'tan', epoch:state.epochs[0]},
{lexeme:'ənta', epoch:state.epochs[0]}
];
it('lexicon returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
it('lexicon addition without epoch returns updated lexicon with default epoch', () => {
const action = {type: 'ADD_LEXEME', value: {lexeme:'ntʰa'}}
expect(stateReducer(state, action)).toEqual({...state, lexicon:[...state.lexicon, {lexeme:'ntʰa', epoch:state.epochs[0]}]});
});
it('lexicon addition with epoch returns updated lexicon with correct epoch', () => {
const action = {type: 'ADD_LEXEME', value: {lexeme:'ntʰa', epoch: 'epoch-2'}}
expect(stateReducer(state, action)).toEqual({...state, lexicon:[...state.lexicon, {lexeme:'ntʰa', epoch:state.epochs[1]}]});
});
it('lexicon set returns updated lexicon with correct epoch', () => {
const newLexicon = [
{lexeme:'anta', epoch:'epoch-1'},
{lexeme:'anat', epoch:'epoch-1'},
{lexeme:'anət', epoch:'epoch-1'},
{lexeme:'anna', epoch:'epoch-1'}
]
const action = {type: 'SET_LEXICON', value: newLexicon}
expect(stateReducer(state, action)).toEqual({...state, lexicon:[
{lexeme:'anta', epoch:state.epochs[0]},
{lexeme:'anat', epoch:state.epochs[0]},
{lexeme:'anət', epoch:state.epochs[0]},
{lexeme:'anna', epoch:state.epochs[0]}
]});
});
it('lexicon set with no epoch returns updated lexicon with default epoch', () => {
const newLexicon = [
{lexeme:'anta', epoch:state.epochs[0]},
{lexeme:'anat', epoch:state.epochs[0]},
{lexeme:'anət', epoch:state.epochs[1]},
{lexeme:'anna', epoch:state.epochs[0]}
]
const inputLexicon = [
{lexeme:'anta'},
{lexeme:'anat'},
{lexeme:'anət', epoch:'epoch-2'},
{lexeme:'anna'}
]
const action = {type: 'SET_LEXICON', value: inputLexicon}
expect(stateReducer(state, action)).toEqual({...state, lexicon:newLexicon});
})
});


@ -1,20 +0,0 @@
// @flow
import type { stateType } from './reducer';
export type optionAction = {
type: 'SET_OPTIONS',
value: {
option: string,
setValue: string
}
};
export const setOptions = (state: stateType, action: optionAction): stateType => {
const option = action.value.option;
let value = action.value.setValue;
if (value === 'true') value = true;
if (value === 'false') value = false;
const mutatedState = {...state};
mutatedState.options[option] = value;
return mutatedState;
}
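Because option values arrive from form inputs as strings, `setOptions` coerces the literal strings `'true'`/`'false'` to booleans before storing, and stores anything else verbatim. A standalone sketch of just that coercion (assumed to mirror the reducer above):

```javascript
// Sketch: the string-to-boolean coercion performed by setOptions.
const coerceOptionValue = (value) => {
  if (value === 'true') return true;
  if (value === 'false') return false;
  return value; // any other string is stored as-is, e.g. 'proto'
};

console.log(coerceOptionValue('true'));  // true
console.log(coerceOptionValue('proto')); // proto
```

Note that the reducer's `{...state}` spread is shallow, so `mutatedState.options[option] = value` also mutates the original `state.options` object; copying `options` as well would keep the reducer pure.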


@ -1,38 +0,0 @@
import { stateReducer } from './reducer';
import { initState } from './reducer.init';
describe('Options', () => {
let state = {}
beforeEach(() => {
state = initState();
});
it('Options returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
// output: 'default', save: false
it('Options change to output returns with changed value', () => {
const action = {type: 'SET_OPTIONS', value: {option: 'output', setValue: 'proto'}};
expect(stateReducer(state, action)).toEqual(
{...state,
options: {...state.options,
output: 'proto'
}
}
);
});
it('Options change to save returns with changed value', () => {
const action = {type: 'SET_OPTIONS', value: {option: 'save', setValue: 'true'}};
expect(stateReducer(state, action)).toEqual(
{...state,
options: {...state.options,
save: true
}
}
);
});
});


@ -1,73 +0,0 @@
import {stateReducer} from './reducer';
describe('Phones', () => {
const n_phone = {features: {nasal: true}, grapheme: 'n'};
const state = {};
beforeEach(()=> {
state.phones= { n: n_phone };
state.features = {
nasal: {
positive: [state.phones.n],
negative: []
}
};
})
it('phones returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
it('feature addition returns new feature list with positive phones updated', () => {
const action = {type: 'ADD_FEATURE', value: {feature: 'anterior', positivePhones: ['n']}};
expect(stateReducer(state, action)).toEqual(
{...state,
features:{...state.features, anterior: { positive: [state.phones.n], negative: [] }},
phones:{...state.phones, n:{...state.phones.n, features: {...state.phones.n.features, anterior: true}}}
}
)
})
it('feature addition returns new feature list with negative phones update', () => {
const action = {type: 'ADD_FEATURE', value: {feature: 'sonorant', negativePhones: ['t']}};
expect(stateReducer(state, action)).toEqual(
{...state,
features:{...state.features, sonorant: { positive: [], negative: [state.phones.t] }},
phones:{...state.phones, t:{features:{sonorant: false}, grapheme: 't'}}
}
);
});
it('feature addition returns new feature list with positive and negative phones update', () => {
const action = {type: 'ADD_FEATURE', value: {feature: 'sonorant', positivePhones: ['n'], negativePhones: ['t']}};
expect(stateReducer(state, action)).toEqual(
{...state,
features:{...state.features, sonorant: { positive: [state.phones.n], negative: [state.phones.t] }},
phones:{...state.phones,
t:{features:{sonorant: false}, grapheme: 't'},
n:{...state.phones.n, features: {...state.phones.n.features, sonorant: true}}
}
}
);
});
it('feature addition returns new feature list with multi-graph phones updated', () => {
const action = {type: 'ADD_FEATURE', value: {feature: 'aspirated', positivePhones: ['ntʰ'], negativePhones: ['n','t']}};
expect(stateReducer(state, action)).toEqual(
{...state,
features:{...state.features,
aspirated: {
positive: [state.phones.n.t.ʰ],
negative: [state.phones.n, state.phones.t]
}
},
phones:{...state.phones,
t:{features:{aspirated: false}, grapheme: 't'},
n:{...state.phones.n, features: {...state.phones.n.features, aspirated: false},
t: {ʰ:{features:{aspirated:true}, grapheme: 'ntʰ'}}
}
}
}
);
});
});
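The multi-graph expectations above (`state.phones.n.t.ʰ`) assume phones are stored as a character-keyed trie, where each node that terminates a grapheme carries the full grapheme string. A hypothetical sketch of that insertion logic (not the reducer's actual code):

```javascript
// Sketch: storing graphemes in a character trie, so 'ntʰ' nests at phones.n.t.ʰ.
const phones = {};
const addPhone = (grapheme) => {
  let node = phones;
  for (const ch of [...grapheme]) {
    node[ch] = node[ch] || {}; // descend, creating nodes as needed
    node = node[ch];
  }
  node.grapheme = grapheme; // mark the terminal node with the full grapheme
  return node;
};

addPhone('n');
addPhone('ntʰ');
console.log(phones.n.grapheme);        // n
console.log(phones.n.t['ʰ'].grapheme); // ntʰ
```

This is why prefix phones like `n` survive intact when a longer graph such as `ntʰ` is added: the longer graph only extends the trie below the existing node.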


@ -1,293 +0,0 @@
// @flow
import type { stateType, epochType, phoneType } from './reducer';
export type resultsAction = {
type: 'RUN'
}
export type decomposedRulesType = [
{
environment: {
pre: [{[key: string]: boolean}],
position: [{[key: string]: boolean}],
post: [{[key: string]: boolean}]
},
newFeatures: [{[key: string]: boolean}]
}
]
type ruleBundle = {
environment: {
pre: string,
position: string,
post: string
},
newFeatures: string
}
const getProperty = property => object => object[property]
const findFeaturesFromLexeme = (phones: {}, lexeme:string): [] => {
let featureBundle = []
let lastIndex = lexeme.length - 1;
let node = {};
[...lexeme].forEach((graph, index) => {
try {
if (!index ) return node = phones[graph]
if (index === lastIndex) return node[graph]
? featureBundle.push(node[graph])
: featureBundle.push(node, phones[graph])
if (!node[graph] && node.features) {
featureBundle.push(node)
return node = phones[graph]
}
if (!node) return node = phones[graph]
return node = node[graph]
}
catch (e) {
throw {e, 'phones[graph]':phones[graph], index, lexeme }
}
})
return featureBundle;
}
const findFeaturesFromGrapheme = (phones: {}, lexeme:string): [] => {
let featureBundle = []
let lastIndex = lexeme.length - 1;
let node = {};
[...lexeme].forEach((graph, index) => {
if (!index && !lastIndex) featureBundle.push(phones[graph].features)
if (!index) return node = phones[graph]
if (index === lastIndex) return node[graph]
? featureBundle.push(node[graph])
: featureBundle.push(node, phones[graph])
if (!node[graph] && node.features) {
featureBundle.push(node)
return node = phones[graph]
}
    if (!node) return node = phones[graph]
    return node = node[graph]
})
return featureBundle;
}
const errorMessage = ([prefix, separator], location, err) => `${prefix}${location}${separator}${err}`
const lintRule = (rule) => {
if (!rule.match(/>/g)) throw `Insert '>' operator between target and result`
if (!rule.match(/\//g)) throw `Insert '/' operator between change and environment`
if (!rule.match(/_/g)) throw `Insert '_' operator in environment`
if (rule.match(/>/g).length > 1) throw `Too many '>' operators`
if (rule.match(/\//g).length > 1) throw `Too many '/' operators`
if (rule.match(/_/g).length > 1) throw `Too many '_' operators`
return rule.split(/>|\/|_/g);
}
const decomposeRule = (rule: string, index: number): ruleBundle => {
try {
// splits rule at '>' '/' and '_' substrings resulting in array of length 4
const [position, newFeatures, pre, post] = lintRule(rule);
return { environment: { pre, position, post }, newFeatures }
} catch (err) {
throw errorMessage`Error in line ${index + 1}: ${err}`;
}
}
const isUnknownFeatureToken = token => !['-', '+', ']', '[', ' '].includes(token);
const doesFeatureRuleContainUnknownToken = features => {
const unknownTokens = features
.match(/\W/g)
.filter(isUnknownFeatureToken)
if (unknownTokens.length) throw `Unknown token '${unknownTokens[0]}'`;
return true
}
const reduceFeaturesToBoolean = bool => (map, feature) => ({...map, [feature]: bool})
const getFeatures = (phoneme: string, featureBoolean): {} => {
try {
const featureMatch = featureBoolean
// regEx to pull positive features
? /(?=\+.).*(?<=\-)|(?=\+.).*(?!\-).*(?<=\])/g
// regEx to pull negative features
: /(?=\-.).*(?<=\+)|(?=\-.).*(?!\+).*(?<=\])/g
const [ features ] = phoneme.match(featureMatch) || [ null ];
if (features) {
doesFeatureRuleContainUnknownToken(features)
return features
.trim()
.match(/\w+/g)
.reduce(reduceFeaturesToBoolean(featureBoolean), {})
}
return {}
} catch (err) {
throw err;
}
}
const mapToPositiveAndNegativeFeatures = phoneme => (
{ ...getFeatures(phoneme, true), ...getFeatures(phoneme, false) } )
const mapStringToFeatures = (ruleString, phones) => {
if (ruleString) {
if (ruleString === '.') return [];
if (ruleString === '#') return ['#']
if (ruleString === '0') return [];
const ruleBrackets = ruleString.match(/\[.*\]/)
try {
if (ruleBrackets) {
return ruleString
.split('[')
// filter out empty strings
.filter(v => v)
.map(mapToPositiveAndNegativeFeatures)
}
return findFeaturesFromGrapheme(phones, ruleString);
} catch (err) {
throw err;
}
}
return {};
}
const mapRuleBundleToFeatureBundle = phones => ( ruleBundle, index ) => {
// for each object in ruleBundle, map values to array of objects with feature-boolean key-value pairs
try {
const { newFeatures, environment:{ pre, position, post } } = ruleBundle;
return {
environment: {
pre: mapStringToFeatures(pre, phones),
position: mapStringToFeatures(position, phones),
post: mapStringToFeatures(post, phones),
},
newFeatures: mapStringToFeatures(newFeatures, phones)
}
} catch (err) {
throw errorMessage`Error in line ${index + 1}: ${err}`;
}
}
export const decomposeRules = (epoch: epochType, phones: {[key: string]: phoneType}): decomposedRulesType => {
const { changes } = epoch
try {
return changes
.map(decomposeRule)
.map(mapRuleBundleToFeatureBundle(phones));
} catch (err) {
const ruleError = {epoch: epoch.name, error: err}
throw ruleError;
}
}
const isPhonemeBoundByRule = phonemeFeatures => (ruleFeature, index) => {
const phoneme = phonemeFeatures[index].features;
return Object.entries(ruleFeature).reduce((bool, [feature, value]) => {
if (!bool) return false;
if (!phoneme.hasOwnProperty(feature)) return false;
if (!phoneme[feature] && !value) return true;
if (phoneme[feature] !== value) return false;
return true;
}, true);
}
const isEnvironmentBoundByRule = (phonemeFeatures, ruleFeatures) => {
if (!ruleFeatures) return true;
return ruleFeatures.filter(isPhonemeBoundByRule(phonemeFeatures)).length === ruleFeatures.length;
}
const getEntries = object => Object.entries(object);
const isObjectWithPropertyInArray = (array, property) => candidate => array.map(getProperty(property)).includes(candidate[property]);
const transformFeatureValues = features => ([newFeature, newValue]) => features[newFeature][newValue ? 'positive': 'negative'];
const reduceFeatureValues = (newPhoneme, [newFeature, newValue]) => ({ ...newPhoneme, [newFeature]: newValue })
const transformPhoneme = (phoneme, newFeatures, features) => {
if (!newFeatures) return {}
const newPhonemeFeatures = getEntries(newFeatures).reduce(reduceFeatureValues, {...phoneme.features});
const newPhonemeCandidates = getEntries(newPhonemeFeatures).map(transformFeatureValues(features));
return newPhonemeCandidates
.reduce((candidates, candidatesSubset, index, array) => candidates.filter(isObjectWithPropertyInArray(candidatesSubset, 'grapheme'))
, newPhonemeCandidates[newPhonemeCandidates.length - 1])[0];
}
const transformLexemeInitial = (newLexeme, pre, post, position, phoneme, index, lexemeBundle, newFeatures, features) => {
if (index !== pre.length - 1) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule([phoneme], position)) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule(lexemeBundle.slice(index + position.length, index + post.length + position.length), post)) return [...newLexeme, phoneme];
const newPhoneme = transformPhoneme(phoneme, newFeatures[0], features);
// if deletion occurs
  if (!newPhoneme || !newPhoneme.grapheme) return [...newLexeme];
return [...newLexeme, newPhoneme];
}
const transformLexemeCoda = (newLexeme, pre, post, position, phoneme, index, lexemeBundle, newFeatures, features) => {
if (index + post.length !== lexemeBundle.length) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule(lexemeBundle.slice(index - pre.length, index), pre)) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule([phoneme], position)) return [...newLexeme, phoneme];
const newPhoneme = transformPhoneme(phoneme, newFeatures[0], features);
// if deletion occurs
  if (!newPhoneme || !newPhoneme.grapheme) return [...newLexeme];
return [...newLexeme, newPhoneme];
}
export const transformLexeme = (lexemeBundle, rule, features) => {
const {pre, post, position} = rule.environment;
const newLexeme = lexemeBundle.reduce((newLexeme, phoneme, index) => {
if (pre.find(val => val === '#')) return transformLexemeInitial(newLexeme, pre, post, position, phoneme, index, lexemeBundle, rule.newFeatures, features);
if (post.find(val => val === '#')) return transformLexemeCoda(newLexeme, pre, post, position, phoneme, index, lexemeBundle, rule.newFeatures, features);
if ( index < pre.length || index >= lexemeBundle.length - post.length ) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule(lexemeBundle.slice(index - pre.length, index), pre)) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule([phoneme], position)) return [...newLexeme, phoneme];
if (!isEnvironmentBoundByRule(lexemeBundle.slice(index, index + post.length), post)) return [...newLexeme, phoneme];
const newPhoneme = transformPhoneme(phoneme, rule.newFeatures[0], features);
// if deletion occurs
    if (!newPhoneme || !newPhoneme.grapheme) return [...newLexeme];
return [...newLexeme, newPhoneme];
}, [])
return newLexeme;
}
const formBundleFromLexicon = lexicon => phones => lexicon.map(({lexeme}) => findFeaturesFromLexeme(phones, lexeme))
const transformLexicon = lexiconBundle =>
ruleBundle =>
features =>
lexiconBundle.map(lexemeBundle => ruleBundle.reduce(
(lexeme, rule, i) => transformLexeme(lexeme, rule, features)
, lexemeBundle
))
const getGraphemeFromEntry = ([_, phoneme]) => phoneme.grapheme
const stringifyLexeme = (lexeme) => lexeme.map(getProperty('grapheme')).join('')
const stringifyResults = ({lexicon, ...passResults}) => ({...passResults, lexicon: lexicon.map(stringifyLexeme)})
export const run = (state: stateType, action: resultsAction): stateType => {
  // iterate through each epoch, reusing a parent epoch's results as input where one is set
try {
const passResults = state.epochs.reduce((results, epoch, _) => {
const { phones, features, lexicon } = state;
let lexiconBundle;
if ( epoch.parent ) {
lexiconBundle = results.find(result => result.pass === epoch.parent)
}
if (!lexiconBundle) {
lexiconBundle = formBundleFromLexicon(lexicon)(phones);
}
else {
lexiconBundle = lexiconBundle.lexicon
}
const ruleBundle = decomposeRules(epoch, phones);
const passResults = transformLexicon(lexiconBundle)(ruleBundle)(features)
const pass = { pass: epoch.name, lexicon: passResults }
if ( epoch.parent ) pass.parent = epoch.parent;
return [...results, pass];
}, []);
const results = passResults.map(stringifyResults);
return {...state, results, errors: {}, parseResults: '' }
} catch (err) {
console.log(err)
return {...state, errors: err, results:[], parseResults: '' };
}
}
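The `lintRule`/`decomposeRule` pair above hinges on a well-formed rule containing exactly one `>`, one `/`, and one `_`, so splitting on all three operators always yields the four parts `position`, `newFeatures`, `pre`, `post` in that order. A minimal sketch of the split itself:

```javascript
// Sketch: splitting 'position>newFeatures/pre_post' on the three operators.
const splitRule = (rule) => rule.split(/>|\/|_/g);

console.log(splitRule('n>m/#_.'));         // [ 'n', 'm', '#', '.' ]
console.log(splitRule('[+ nasal]>0/._.')); // [ '[+ nasal]', '0', '.', '.' ]
```

The operator-count checks in `lintRule` are what guarantee the array has exactly four elements, which is why each malformed rule gets its own targeted error message before the split is trusted.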


@ -1,318 +0,0 @@
import { stateReducer } from './reducer';
import { initState } from './reducer.init';
import { decomposeRules, transformLexeme } from './reducer.results';
describe('Results', () => {
let state = {};
beforeEach(()=> {
state = {};
})
it('results returned unaltered', () => {
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});
it('rules decomposed properly', () => {
const { epochs, phones } = initState(1);
const result = getResult();
expect(decomposeRules(epochs[0], phones)).toStrictEqual(result);
});
it('rule without ">" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't/n/_' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Insert '>' operator between target and result"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule with too many ">" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't>n>/_' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Too many '>' operators"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule without "/" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't>n_' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Insert '/' operator between change and environment"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule with too many "/" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't>n/_/' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Too many '/' operators"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule without "_" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't>n/' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Insert '_' operator in environment"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule with too many "_" returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ 't>n/__' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Too many '_' operators"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('rule with incorrect feature syntax returns helpful error message', () => {
const { phones } = initState();
const epoch = { name: 'error epoch', changes: [ '[+ occlusive - nasal = obstruent]>n/_' ] }
const errorMessage = {epoch: 'error epoch', error: "Error in line 1: Unknown token '='"};
let receivedError;
try {
decomposeRules(epoch, phones)
}
catch (err) {
receivedError=err;
}
expect(receivedError).toStrictEqual(errorMessage);
})
it('expect transform lexeme to apply rule to lexeme', () => {
const lexemeBundle = getlexemeBundle();
const resultsLexeme = [...lexemeBundle]
resultsLexeme[2] = lexemeBundle[1]
const rule = getRule();
expect(transformLexeme(lexemeBundle, rule, initState().features)).toEqual(resultsLexeme)
})
it('results returned from first sound change rule (feature matching)', () => {
const action = {type: 'RUN'};
state = initState(1)
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'anna', 'anat', 'anət', 'anna', 'tan', 'ənna'
]
}
]);
});
it('results returned through second sound change rule (phoneme matching)', () => {
const action = {type: 'RUN'};
state = initState(2)
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annɯ', 'anat', 'anət', 'annɯ', 'tan', 'ənnɯ'
]
}
]);
});
it('results returned through third sound change rule (phoneme dropping)', () => {
const action = {type: 'RUN'};
state = initState(3)
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annɯ', 'anat', 'ant', 'annɯ', 'tan', 'nnɯ'
]
}
]);
});
it('results returned through fourth sound change rule (lexeme initial environment)', () => {
const action = {type: 'RUN'};
state = initState(4)
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annɯ', 'anat', 'ant', 'annɯ', 'tʰan', 'nnɯ'
]
}
]);
});
it('results returned through fifth sound change rule (lexeme final environment)', () => {
const action = {type: 'RUN'};
state = initState(5)
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annu', 'anat', 'ant', 'annu', 'tʰan', 'nnu'
]
}
]);
});
// it('results returned through sixth sound change rule (multi-phoneme target)', () => {
// const action = {type: 'RUN'};
// state = initState(6)
// expect(stateReducer(state, action).results).toEqual([
// {
// pass: 'epoch-1',
// lexicon: [
// 'annu', 'anta', 'ant', 'annu', 'tʰan', 'nnu'
// ]
// }
// ]);
// });
it('results returned for multiple epochs without parent epoch', () => {
const action = {type: 'RUN'};
state = initState(5);
const newEpoch = {
name: 'epoch-2',
changes: [
'[+ sonorant ]>0/#_.',
'n>0/#_n'
]
}
state.epochs = [ ...state.epochs, newEpoch ]
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annu', 'anat', 'ant', 'annu', 'tʰan', 'nnu'
]
},
{
pass: 'epoch-2',
lexicon: [
'nta', 'nat', 'nət', 'na', 'tan', 'nta'
]
}
])
})
it('results returned for multiple epochs with parent epoch', () => {
const action = {type: 'RUN'};
state = initState(5);
const newEpoch = {
name: 'epoch-2',
parent: 'epoch-1',
changes: [
'[+ sonorant ]>0/#_.'
]
}
state.epochs = [ ...state.epochs, newEpoch ]
expect(stateReducer(state, action).results).toEqual([
{
pass: 'epoch-1',
lexicon: [
'annu', 'anat', 'ant', 'annu', 'tʰan', 'nnu'
]
},
{
pass: 'epoch-2',
parent: 'epoch-1',
lexicon: [
'nnu', 'nat', 'nt', 'nnu', 'tʰan', 'nu'
]
}
])
})
});
const getlexemeBundle = () => ([
{
grapheme: 'a',
features: {
sonorant: true,
back: true,
low: true,
high: false,
rounded: false
}
},
{
grapheme: 'n',
features: { sonorant: true, nasal: true, occlusive: true, coronal: true }
},
{
grapheme: 't',
features: { occlusive: true, coronal: true, obstruent: true, nasal: false }
},
{
grapheme: 'a',
features: {
sonorant: true,
back: true,
low: true,
high: false,
rounded: false
}
}
])
const getRule = () => ({
environment: {
pre: [ { sonorant: true, nasal: true, occlusive: true, coronal: true } ],
position: [ { occlusive: true, nasal: false } ],
post: []
},
newFeatures: [ { occlusive: true, nasal: true } ]
})
const getResult = () => ([
{
environment: {
pre: [
{
sonorant: true, nasal: true, occlusive: true, coronal: true
}
],
position: [
{occlusive: true, nasal: false}
],
post: [],
},
newFeatures: [{occlusive: true, nasal: true}]
}
]);
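The environment tests above all reduce to one predicate: a phoneme satisfies a rule's feature bundle when every feature the rule names exists on the phoneme with the same boolean value. A standalone sketch of that subset check (equivalent in spirit to `isPhonemeBoundByRule`, though not the exported function):

```javascript
// Sketch: a phoneme matches a rule bundle when every named feature is
// present on the phoneme and carries the same boolean value.
const phonemeMatches = (phonemeFeatures, ruleFeatures) =>
  Object.entries(ruleFeatures).every(
    ([feature, value]) =>
      Object.prototype.hasOwnProperty.call(phonemeFeatures, feature) &&
      phonemeFeatures[feature] === value
  );

const n = { sonorant: true, nasal: true, occlusive: true, coronal: true };
console.log(phonemeMatches(n, { nasal: true }));                  // true
console.log(phonemeMatches(n, { nasal: true, obstruent: true })); // false
```

Features absent from the rule are simply not checked, which is what lets a sparse bundle like `{ occlusive: true, nasal: false }` match `t` without naming every feature `t` carries.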


@ -1,8 +0,0 @@
import {stateReducer} from './reducer';
it('default returns state unaltered', () => {
const state = {data: 'example'};
const action = {type: ''};
expect(stateReducer(state, action)).toBe(state);
});


@ -1,135 +0,0 @@
// This optional code is used to register a service worker.
// register() is not called by default.
// This lets the app load faster on subsequent visits in production, and gives
// it offline capabilities. However, it also means that developers (and users)
// will only see deployed updates on subsequent visits to a page, after all the
// existing tabs open on the page have been closed, since previously cached
// resources are updated in the background.
// To learn more about the benefits of this model and instructions on how to
// opt-in, read https://bit.ly/CRA-PWA
const isLocalhost = Boolean(
window.location.hostname === 'localhost' ||
// [::1] is the IPv6 localhost address.
window.location.hostname === '[::1]' ||
// 127.0.0.1/8 is considered localhost for IPv4.
window.location.hostname.match(
/^127(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$/
)
);
export function register(config) {
if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
// The URL constructor is available in all browsers that support SW.
const publicUrl = new URL(process.env.PUBLIC_URL, window.location.href);
if (publicUrl.origin !== window.location.origin) {
// Our service worker won't work if PUBLIC_URL is on a different origin
// from what our page is served on. This might happen if a CDN is used to
// serve assets; see https://github.com/facebook/create-react-app/issues/2374
return;
}
window.addEventListener('load', () => {
const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`;
if (isLocalhost) {
// This is running on localhost. Let's check if a service worker still exists or not.
checkValidServiceWorker(swUrl, config);
// Add some additional logging to localhost, pointing developers to the
// service worker/PWA documentation.
navigator.serviceWorker.ready.then(() => {
console.log(
'This web app is being served cache-first by a service ' +
'worker. To learn more, visit https://bit.ly/CRA-PWA'
);
});
} else {
// Is not localhost. Just register service worker
registerValidSW(swUrl, config);
}
});
}
}
function registerValidSW(swUrl, config) {
navigator.serviceWorker
.register(swUrl)
.then(registration => {
registration.onupdatefound = () => {
const installingWorker = registration.installing;
if (installingWorker == null) {
return;
}
installingWorker.onstatechange = () => {
if (installingWorker.state === 'installed') {
if (navigator.serviceWorker.controller) {
// At this point, the updated precached content has been fetched,
// but the previous service worker will still serve the older
// content until all client tabs are closed.
console.log(
'New content is available and will be used when all ' +
'tabs for this page are closed. See https://bit.ly/CRA-PWA.'
);
// Execute callback
if (config && config.onUpdate) {
config.onUpdate(registration);
}
} else {
// At this point, everything has been precached.
// It's the perfect time to display a
// "Content is cached for offline use." message.
console.log('Content is cached for offline use.');
// Execute callback
if (config && config.onSuccess) {
config.onSuccess(registration);
}
}
}
};
};
})
.catch(error => {
console.error('Error during service worker registration:', error);
});
}
function checkValidServiceWorker(swUrl, config) {
// Check if the service worker can be found. If it can't, reload the page.
fetch(swUrl)
.then(response => {
// Ensure service worker exists, and that we really are getting a JS file.
const contentType = response.headers.get('content-type');
if (
response.status === 404 ||
(contentType != null && contentType.indexOf('javascript') === -1)
) {
// No service worker found. Probably a different app. Reload the page.
navigator.serviceWorker.ready.then(registration => {
registration.unregister().then(() => {
window.location.reload();
});
});
} else {
// Service worker found. Proceed as normal.
registerValidSW(swUrl, config);
}
})
.catch(() => {
console.log(
'No internet connection found. App is running in offline mode.'
);
});
}
export function unregister() {
if ('serviceWorker' in navigator) {
navigator.serviceWorker.ready.then(registration => {
registration.unregister();
});
}
}


@ -1,75 +0,0 @@
# LATL specification
## Feature Definition
## Rule Definition
ex.
```
(
`Unmotivated A to C`
A -> B / _
A -> C / _
``A becomes C in all environments with an intermediate state of B``
)
```
### Rule Body
#### Sound Definition
#### Change Definition
#### Environment Definition
##### Null Environment
Valid syntaxes:
```
A -> B ; no indicated environment
A -> B / _ ; environment indicated with underscore
A -> B / . _ . ; environment indicated with underscore and placeholder dots
```
### Rule Metadata
#### Rule Title
#### Rule Description
## Language Primitives
## Data Structures
### Sets
Sets are collections of pointers to phones. The GLOBAL set contains all phones, making all other sets subsets of GLOBAL.
#### Global Set
[ GLOBAL ] is a shorthand for [ GLOBAL.SETS ]
#### Set Definition
Sets are defined with the set keyword followed by an equal sign and a set expression:
```
set SHORT_VOWELS = [ a, i, u ]
```
A single alias can be provided to the set during definition:
```
; the alias N can be used to refer to this set
set NASAL_PULMONIC_CONSONANTS, N = [ m, ɱ, n̼, n, ɳ, ɲ, ŋ, ɴ ]
```
Multiple sets can be defined in a single statement by separating the definitions with a comma followed by whitespace:
```
set PLOSIVES = [ p, t, k ],
FRICATIVES = [ f, s, x ],
LABIALIZED_PLOSIVES = { PLOSIVES yield [ X concat ʷ ] }
```
#### Set Usage
#### Set Operations
##### 'and' Operation
##### 'or' Operation
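Not formally specified yet; the syntax exercised by the parser fixtures joins two set expressions with `or` inside curly brackets, yielding their union:
```
set CLICK_CONSONANTS = { TENUIS_CLICK_CONSONANTS or VOICED_CLICK_CONSONANTS }
```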
##### 'not' Operation
##### 'nor' Operation
##### 'in' Operation
##### 'yield' Operation
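Not formally specified yet; the parser fixtures use `yield` to map each member matched by an `in` expression to a transformed phone:
```
; derive nasal vowels from the oral vowel set
set NASAL_VOWELS = { [ V ] in ORAL_VOWELS yield [ Ṽ ] }
```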
### Lexemes
#### Lexeme Operations
### Phone
For set of phones 'a', 'b', and 'ab':
```
GLOBAL ┬▻ <Key: a> ┬▻ <Key: b> ┬▻ { feature: <Boolean>, ... }
│ │ └▻ grapheme: <String: 'ab'>
│ └┬▻ { feature: <Boolean>, ... }
│ └▻ grapheme: <String: 'a'>
└┬▻ { feature: <Boolean>, ... }
└▻ grapheme: <String: 'b'>
```
#### Phone Operations
### Epochs


@ -1,19 +0,0 @@
import { parser } from './parser';
export const codeGenerator = (latl) => {
const results = parser().feed(latl).results;
const nodeReader = (code, node) => {
if (node.length) {
// reduce over the current node's children, not the top-level results
return node.reduce(nodeReader, code)
}
if (!node) return code;
if (node.main) {
return nodeReader(code, node.main)
}
return code + node;
}
return nodeReader('', results)
}
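The flattening behavior of `nodeReader` can be sketched standalone, decoupled from the nearley parser (the sample AST below is a hypothetical minimal input, not fixture data):

```javascript
// Standalone sketch of the nodeReader flattening used by codeGenerator.
// It walks nested arrays and { main: [...] } wrapper nodes, concatenating
// any string leaves into a single output string.
const nodeReader = (code, node) => {
  if (Array.isArray(node)) return node.reduce(nodeReader, code);
  if (!node) return code;
  if (node.main) return nodeReader(code, node.main);
  return code + node;
};

console.log(nodeReader('', [{ main: ['set A', ' = [ a ]'] }])); // → set A = [ a ]
```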


@ -1,120 +0,0 @@
// Generated automatically by nearley, version 2.19.1
// http://github.com/Hardmath123/nearley
(function () {
function id(x) { return x[0]; }
const { lexer } = require('./lexer.js');
const getTerminal = d => d ? d[0] : null;
const getAll = d => d.map((item, i) => ({ [i]: item }));
const flag = token => d => d.map(item => ({ [token]: item }))
const clearNull = d => d.filter(t => !!t && (t.length !== 1 || t[0])).map(t => t.length ? clearNull(t) : t);
const flagIndex = d => d.map((item, i) => ({[i]: item}))
const remove = _ => null;
const append = d => d.join('');
const constructSet = d => d.reduce((acc, t) => {
if (t && t.type === 'setIdentifier') acc.push({set: t});
if (t && t.length) acc[acc.length - 1].phones = t;
return acc;
}, []);
const pipe = (...funcs) => d => funcs.reduce((acc, func) => func(acc), d);
const objFromArr = d => d.reduce((obj, item) => ({ ...obj, ...item }), {});
var grammar = {
Lexer: lexer,
ParserRules: [
{"name": "main$ebnf$1", "symbols": []},
{"name": "main$ebnf$1$subexpression$1", "symbols": ["_", "statement"]},
{"name": "main$ebnf$1", "symbols": ["main$ebnf$1", "main$ebnf$1$subexpression$1"], "postprocess": function arrpush(d) {return d[0].concat([d[1]]);}},
{"name": "main", "symbols": ["main$ebnf$1", "_"], "postprocess": pipe(
clearNull,
// recursive call to fix repeat?
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
flag('main'),
getTerminal,
) },
{"name": "_$ebnf$1$subexpression$1", "symbols": [(lexer.has("whiteSpace") ? {type: "whiteSpace"} : whiteSpace)]},
{"name": "_$ebnf$1", "symbols": ["_$ebnf$1$subexpression$1"], "postprocess": id},
{"name": "_$ebnf$1", "symbols": [], "postprocess": function(d) {return null;}},
{"name": "_", "symbols": ["_$ebnf$1"], "postprocess": remove},
{"name": "__", "symbols": [(lexer.has("whiteSpace") ? {type: "whiteSpace"} : whiteSpace)], "postprocess": remove},
{"name": "equal", "symbols": [(lexer.has("equal") ? {type: "equal"} : equal)], "postprocess": remove},
{"name": "statement", "symbols": ["comment"]},
{"name": "statement", "symbols": ["definition"], "postprocess": pipe(
d => d.flatMap(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
// recursive call to fix repeat?
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
// may split from other definition statements
d => d.map(t => t && t.length > 1 ? ({ type: 'set', ...objFromArr(t) }) : null)
) },
{"name": "comment", "symbols": [(lexer.has("comment") ? {type: "comment"} : comment)], "postprocess": pipe(getTerminal, remove)},
{"name": "definition$ebnf$1", "symbols": []},
{"name": "definition$ebnf$1$subexpression$1", "symbols": ["setDefinition", (lexer.has("comma") ? {type: "comma"} : comma), "__"]},
{"name": "definition$ebnf$1", "symbols": ["definition$ebnf$1", "definition$ebnf$1$subexpression$1"], "postprocess": function arrpush(d) {return d[0].concat([d[1]]);}},
{"name": "definition", "symbols": [(lexer.has("kwSet") ? {type: "kwSet"} : kwSet), "__", "definition$ebnf$1", "setDefinition"], "postprocess": pipe(
// not yet sure why this call is required twice
d => d.map(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
d => d.map(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
d => d.map(u => u && u.length ? u.map(v => v.length ? v.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet')[0] : v) : u),
clearNull,
) },
{"name": "setDefinition$ebnf$1$subexpression$1", "symbols": ["setAlias"]},
{"name": "setDefinition$ebnf$1", "symbols": ["setDefinition$ebnf$1$subexpression$1"], "postprocess": id},
{"name": "setDefinition$ebnf$1", "symbols": [], "postprocess": function(d) {return null;}},
{"name": "setDefinition", "symbols": [(lexer.has("setIdentifier") ? {type: "setIdentifier"} : setIdentifier), "setDefinition$ebnf$1", "__", "equal", "__", "setExpression"], "postprocess":
pipe(
d => d.filter(t => !!t && t.length !== 0),
d => d.map(u => u && u.length ? u.map(t => t && t.length ? t.filter(v => v && v.type !== 'comma') : t) : u),
d => d.map(t => t.type === 'setIdentifier' ? { setIdentifier: t.toString() } : t),
d => d.map(t => t && t.length && t[0].hasOwnProperty('setExpression') ? t[0] : t),
d => d.map(t => t.length ?
// pretty ugly: ([ { type: 'alias', alias: [ string ] } ]) => { setAlias: str }
{ setAlias: t.reduce((aliases, token) => token && token.type === 'alias' ? [...aliases, ...token.alias] : aliases, [])[0] }
: t),
)
},
{"name": "setExpression", "symbols": [(lexer.has("openSquareBracket") ? {type: "openSquareBracket"} : openSquareBracket), "_", "phoneList", "_", (lexer.has("closeSquareBracket") ? {type: "closeSquareBracket"} : closeSquareBracket)]},
{"name": "setExpression$ebnf$1$subexpression$1", "symbols": ["setOperation"]},
{"name": "setExpression$ebnf$1", "symbols": ["setExpression$ebnf$1$subexpression$1"], "postprocess": id},
{"name": "setExpression$ebnf$1", "symbols": [], "postprocess": function(d) {return null;}},
{"name": "setExpression", "symbols": [(lexer.has("openCurlyBracket") ? {type: "openCurlyBracket"} : openCurlyBracket), "_", "setExpression$ebnf$1", "_", (lexer.has("closeCurlyBracket") ? {type: "closeCurlyBracket"} : closeCurlyBracket)], "postprocess":
pipe(
// filters commas and whitespace
d => d.filter(t => t && t.length),
d => d.map(t => t.map(u => u[0])),
flag('setExpression')
) },
{"name": "setAlias", "symbols": [(lexer.has("comma") ? {type: "comma"} : comma), "_", (lexer.has("setIdentifier") ? {type: "setIdentifier"} : setIdentifier)], "postprocess": pipe(
d => d && d.length ? d.filter(t => !!t) : d,
d => d.map(t => t.type === 'setIdentifier' ? t.toString() : null),
d => d.filter(t => !!t),
d => ({type: 'alias', alias: d }),
) },
{"name": "phoneList$ebnf$1", "symbols": []},
{"name": "phoneList$ebnf$1$subexpression$1$ebnf$1", "symbols": []},
{"name": "phoneList$ebnf$1$subexpression$1$ebnf$1$subexpression$1", "symbols": [(lexer.has("comma") ? {type: "comma"} : comma), "_"]},
{"name": "phoneList$ebnf$1$subexpression$1$ebnf$1", "symbols": ["phoneList$ebnf$1$subexpression$1$ebnf$1", "phoneList$ebnf$1$subexpression$1$ebnf$1$subexpression$1"], "postprocess": function arrpush(d) {return d[0].concat([d[1]]);}},
{"name": "phoneList$ebnf$1$subexpression$1", "symbols": [(lexer.has("phone") ? {type: "phone"} : phone), "phoneList$ebnf$1$subexpression$1$ebnf$1"]},
{"name": "phoneList$ebnf$1", "symbols": ["phoneList$ebnf$1", "phoneList$ebnf$1$subexpression$1"], "postprocess": function arrpush(d) {return d[0].concat([d[1]]);}},
{"name": "phoneList", "symbols": ["phoneList$ebnf$1"], "postprocess":
pipe(
d => d ? d[0].map(t => t.filter(u => u.type === 'phone').map(u => u.toString())) : d
)
},
{"name": "setOperation", "symbols": ["orOperation"]},
{"name": "setOperation", "symbols": [(lexer.has("identifier") ? {type: "identifier"} : identifier)], "postprocess": pipe(
d => d.type ? d : ({ identifier: d.toString(), type: 'identifier' })
)},
{"name": "orOperation", "symbols": ["_", "setOperation", "__", (lexer.has("kwSetOr") ? {type: "kwSetOr"} : kwSetOr), "__", "setOperation", "_"], "postprocess": pipe(
d => d.filter(d => !!d),
d => ({ type: 'operator', operator: 'or', operands: [ d[0], d[2] ] }),
) }
]
, ParserStart: "main"
}
if (typeof module !== 'undefined'&& typeof module.exports !== 'undefined') {
module.exports = grammar;
} else {
window.grammar = grammar;
}
})();


@ -1,109 +0,0 @@
@{%
const { lexer } = require('./lexer.js');
const getTerminal = d => d ? d[0] : null;
const getAll = d => d.map((item, i) => ({ [i]: item }));
const flag = token => d => d.map(item => ({ [token]: item }))
const clearNull = d => d.filter(t => !!t && (t.length !== 1 || t[0])).map(t => t.length ? clearNull(t) : t);
const flagIndex = d => d.map((item, i) => ({[i]: item}))
const remove = _ => null;
const append = d => d.join('');
const constructSet = d => d.reduce((acc, t) => {
if (t && t.type === 'setIdentifier') acc.push({set: t});
if (t && t.length) acc[acc.length - 1].phones = t;
return acc;
}, []);
const pipe = (...funcs) => d => funcs.reduce((acc, func) => func(acc), d);
const objFromArr = d => d.reduce((obj, item) => ({ ...obj, ...item }), {});
%}
@lexer lexer
main -> (_ statement):* _
{% pipe(
clearNull,
// recursive call to fix repeat?
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
flag('main'),
getTerminal,
) %}
_ -> (%whiteSpace):?
{% remove %}
__ -> %whiteSpace
{% remove %}
equal -> %equal
{% remove %}
statement -> comment | definition
{% pipe(
d => d.flatMap(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
// recursive call to fix repeat?
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
d => d.map(t => t && t.length === 1 && t[0] ? t[0] : t),
// may split from other definition statements
d => d.map(t => t && t.length > 1 ? ({ type: 'set', ...objFromArr(t) }) : null)
) %}
comment -> %comment
{% pipe(getTerminal, remove) %}
# SETS
definition -> %kwSet __ (setDefinition %comma __):* setDefinition
{% pipe(
// not yet sure why this call is required twice
d => d.map(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
d => d.map(u => u && u.length ? u.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet') : u),
d => d.map(u => u && u.length ? u.map(v => v.length ? v.filter(t => t && t.type !== 'comma' && t.type !== 'kwSet')[0] : v) : u),
clearNull,
) %}
setDefinition -> %setIdentifier (setAlias):? __ equal __ setExpression
{%
pipe(
d => d.filter(t => !!t && t.length !== 0),
d => d.map(u => u && u.length ? u.map(t => t && t.length ? t.filter(v => v && v.type !== 'comma') : t) : u),
d => d.map(t => t.type === 'setIdentifier' ? { setIdentifier: t.toString() } : t),
d => d.map(t => t && t.length && t[0].hasOwnProperty('setExpression') ? t[0] : t),
d => d.map(t => t.length ?
// pretty ugly: ([ { type: 'alias', alias: [ string ] } ]) => { setAlias: str }
{ setAlias: t.reduce((aliases, token) => token && token.type === 'alias' ? [...aliases, ...token.alias] : aliases, [])[0] }
: t),
)
%}
setExpression -> %openSquareBracket _ phoneList _ %closeSquareBracket
| %openCurlyBracket _ (setOperation):? _ %closeCurlyBracket
{%
pipe(
// filters commas and whitespace
d => d.filter(t => t && t.length),
d => d.map(t => t.map(u => u[0])),
flag('setExpression')
) %}
setAlias -> %comma _ %setIdentifier
{% pipe(
d => d && d.length ? d.filter(t => !!t) : d,
d => d.map(t => t.type === 'setIdentifier' ? t.toString() : null),
d => d.filter(t => !!t),
d => ({type: 'alias', alias: d }),
) %}
phoneList -> (%phone (%comma _):* ):*
{%
pipe(
d => d ? d[0].map(t => t.filter(u => u.type === 'phone').map(u => u.toString())) : d
)
%}
setOperation -> orOperation
| %identifier
{% pipe(
d => d.type ? d : ({ identifier: d.toString(), type: 'identifier' })
)%}
orOperation -> _ setOperation __ %kwSetOr __ setOperation _
{% pipe(
d => d.filter(d => !!d),
d => ({ type: 'operator', operator: 'or', operands: [ d[0], d[2] ] }),
) %}
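The `pipe` helper threaded through every postprocessor above is plain left-to-right function composition; a minimal standalone demonstration (the `clean` example is illustrative, not taken from the grammar):

```javascript
// pipe composes unary functions left to right over the parse data `d`,
// exactly as defined in the grammar's helper block.
const pipe = (...funcs) => d => funcs.reduce((acc, func) => func(acc), d);

// example: drop falsy tokens, then stringify the survivors
const clean = pipe(
  d => d.filter(t => !!t),
  d => d.map(t => String(t)),
);

console.log(clean(['a', null, 'b'])); // → [ 'a', 'b' ]
```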


@ -1,124 +0,0 @@
const moo = require("moo");
const lexer = moo.states({
main: {
comment: /;.*$/,
star: { match: /\*/, push: "epoch" },
slash: { match: /\//, push: "lexicon" },
// change so that identifiers are always upper, keywords are always lower, phones are always lower
kwSet: {
match: "set",
type: moo.keywords({ kwSet: "set" }),
push: "setDefinition",
},
identifier: { match: /[A-Za-z]+[\u00c0-\u03FFA-Za-z0-9\\-\\_]*/ },
openBracket: { match: /\[/, push: "feature" },
whiteSpace: { match: /\s+/, lineBreaks: true },
newLine: { match: /\n+/, lineBreaks: true },
},
epoch: {
identifier: {
match: /[A-Za-z]+[\u00c0-\u03FFA-Za-z0-9\\-\\_]*/,
push: "rule",
},
openParen: { match: /\(/, push: "ruleDefinition" },
pipe: { match: /\|/, pop: true },
greaterThan: /\>/,
arrow: /\-\>/,
hash: /#/,
slash: /\//,
dot: /\./,
underscore: /\_/,
newLine: { match: /\n/, lineBreaks: true },
},
ruleDefinition: {
doubleTick: { match: /``/, push: "ruleName" },
singleTick: { match: /`/, push: "ruleDescription" },
// push rule
closeParen: { match: /\)/, pop: true },
newLine: { match: /\n/, lineBreaks: true },
},
ruleName: {
ruleName: { match: /.+(?=``)/ },
doubleTick: { match: /``/, pop: true },
},
ruleDescription: {
ruleDescription: { match: /.+(?=`)/ },
singleTick: { match: /`/, pop: true },
},
rule: {
openSquareBracket: { match: /\[/, push: "ruleFeature" },
// whiteSpace: { match: /\s/ },
newLine: { match: /\n/, pop: true, lineBreaks: true },
},
ruleFeature: {
ruleFeature: { match: /[A-Za-z]+[\u00c0-\u03FFA-Za-z0-9\\-\\_]*/ },
closeBracket: { match: /\]/, pop: true },
newLine: { match: /\n/, lineBreaks: true },
},
lexicon: {
slash: { match: /\//, pop: true },
newLine: { match: /\n/, lineBreaks: true },
},
feature: {
closeBracket: { match: /\]/, pop: true },
positiveAssignment: /\+=/,
negativeAssignment: /\-=/,
newLine: { match: /\n/, lineBreaks: true },
},
setDefinition: {
comment: /;.*$/,
setIdentifier: { match: /[A-Z]+[A-Z_]*/ },
openCurlyBracket: { match: /\{/, push: "setOperation" },
equal: /=/,
openSquareBracket: /\[/,
phone: /[\u00c0-\u03FFa-z]+/,
closeSquareBracket: { match: /\]/ },
comma: { match: /,/, push: "commaOperation" },
whiteSpace: { match: /[\t ]+/ },
newLine: { match: /\n/, pop: true, lineBreaks: true },
},
setOperation: {
closeCurlyBracket: { match: /\}/, pop: true },
// ! restrict identifiers
keyword: {
match: ["not", "and", "or", "nor", "in", "yield", "concat", "dissoc"],
type: moo.keywords({
kwSetNot: "not",
kwSetAnd: "and",
kwSetOr: "or",
kwSetNor: "nor",
kwSetIn: "in",
kwSetYield: "yield",
kwSetConcat: "concat",
kwSetDissoc: "dissoc",
}),
},
whiteSpace: { match: /\s+/, lineBreaks: true },
openSquareBracket: /\[/,
closeSquareBracket: /\]/,
identifier: /[A-Z]+[A-Z_]*/,
phone: /[\u00c0-\u03FFa-z]+/,
},
commaOperation: {
// if comma is detected during a definition, the commaOperation consumes all white space and pops back to definition
// this prevents popping back to main
comment: /\s*;.*$/,
whiteSpace: { match: /\s+/, lineBreaks: true, pop: true },
newLine: { match: /\n/, lineBreaks: true, pop: true },
},
});
module.exports = { lexer };


@ -1,4 +0,0 @@
const nearley = require("nearley");
const grammar = require("./grammar.js");
export const parser = () => new nearley.Parser(nearley.Grammar.fromCompiled(grammar));


@ -1,810 +0,0 @@
export const assertionData = {
simpleComment: {
latl: `; comment`,
tokens: [{ type: "comment", value: "; comment" }],
AST: {
main: [],
},
code: "",
},
simpleSetDefinition: {
latl: `set NASAL_PULMONIC_CONSONANTS = [ m̥, m, ɱ ]`,
tokens: [
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "NASAL_PULMONIC_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m̥" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɱ" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
],
AST: {
main: [
{
type: "set",
setIdentifier: "NASAL_PULMONIC_CONSONANTS",
setExpression: ["m̥", "m", "ɱ"],
},
],
},
code: "",
},
commaSetDefinition: {
latl: `
set NASAL_PULMONIC_CONSONANTS = [ m̥, m, ɱ, n̼, n̥, n, ɳ̊, ɳ, ɲ̊, ɲ, ŋ, ̊ŋ, ɴ ],
STOP_PULMONIC_CONSONANTS = [ p, b, p̪, b̪, t̼, d̼, t, d, ʈ, ɖ, c, ɟ, k, ɡ, q, ɢ, ʡ, ʔ ]`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "NASAL_PULMONIC_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m̥" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɱ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n̼" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n̥" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɳ̊" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɳ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɲ̊" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɲ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ŋ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "̊ŋ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɴ" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "STOP_PULMONIC_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "p" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "b" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "p̪" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "b̪" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "t̼" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "d̼" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "t" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "d" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ʈ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɖ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "c" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɟ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "k" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɡ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "q" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɢ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ʡ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ʔ" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
],
AST: {
main: [
{
type: "set",
setIdentifier: "NASAL_PULMONIC_CONSONANTS",
setExpression: [
"m̥",
"m",
"ɱ",
"n̼",
"n̥",
"n",
"ɳ̊",
"ɳ",
"ɲ̊",
"ɲ",
"ŋ",
"̊ŋ",
"ɴ",
],
},
{
type: "set",
setIdentifier: "STOP_PULMONIC_CONSONANTS",
setExpression: [
"p",
"b",
"p̪",
"b̪",
"t̼",
"d̼",
"t",
"d",
"ʈ",
"ɖ",
"c",
"ɟ",
"k",
"ɡ",
"q",
"ɢ",
"ʡ",
"ʔ",
],
},
],
},
},
setAliasDefinition: {
latl: `
set NASAL_PULMONIC_CONSONANTS, N = [ m̥, m, ɱ, n̼, n̥, n, ɳ̊, ɳ, ɲ̊, ɲ, ŋ, ̊ŋ, ɴ ]`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "NASAL_PULMONIC_CONSONANTS" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "N" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m̥" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "m" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɱ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n̼" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n̥" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "n" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɳ̊" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɳ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɲ̊" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɲ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ŋ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "̊ŋ" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "ɴ" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
],
AST: {
main: [
{
type: "set",
setIdentifier: "NASAL_PULMONIC_CONSONANTS",
setAlias: "N",
setExpression: [
"m̥",
"m",
"ɱ",
"n̼",
"n̥",
"n",
"ɳ̊",
"ɳ",
"ɲ̊",
"ɲ",
"ŋ",
"̊ŋ",
"ɴ",
],
},
],
},
},
setDefinitionJoin: {
latl: `
set CLICK_CONSONANTS = { TENUIS_CLICK_CONSONANTS or VOICED_CLICK_CONSONANTS }`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "TENUIS_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetOr", value: "or" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "VOICED_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
],
AST: {
main: [
{
type: "set",
setIdentifier: "CLICK_CONSONANTS",
setExpression: [
{
type: "operator",
operator: "or",
operands: [
{
type: "identifier",
identifier: "TENUIS_CLICK_CONSONANTS",
},
{
type: "identifier",
identifier: "VOICED_CLICK_CONSONANTS",
},
],
},
],
},
],
},
},
setDefinitionMultiJoin: {
latl: `
set CLICK_CONSONANTS = { TENUIS_CLICK_CONSONANTS or VOICED_CLICK_CONSONANTS
or NASAL_CLICK_CONSONANTS or L_CLICK_CONSONANTS
}`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "TENUIS_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetOr", value: "or" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "VOICED_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: "\n " },
{ type: "kwSetOr", value: "or" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "NASAL_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetOr", value: "or" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "L_CLICK_CONSONANTS" },
{ type: "whiteSpace", value: " \n " },
{ type: "closeCurlyBracket", value: "}" },
],
AST: {
main: [
{
type: "set",
setIdentifier: "CLICK_CONSONANTS",
setExpression: [
{
type: "operator",
operator: "or",
operands: [
{
type: "identifier",
identifier: "TENUIS_CLICK_CONSONANTS",
},
{
type: "operator",
operator: "or",
operands: [
{
type: "identifier",
identifier: "VOICED_CLICK_CONSONANTS",
},
{
type: "operator",
operator: "or",
operands: [
{
type: "identifier",
identifier: "NASAL_CLICK_CONSONANTS",
},
{
type: "identifier",
identifier: "L_CLICK_CONSONANTS",
},
],
},
],
},
],
},
],
},
],
},
},
setDefinitionYield: {
latl: `
set NASAL_VOWELS = { [ V ] in ORAL_VOWELS yield [ Ṽ ] },
SHORT_NASAL_VOWELS = { [ Vː ] in NASAL_VOWELS yield [ V ]ː },
LONG_NASAL_VOWELS = { [ Vː ] in NASAL_VOWELS }`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "NASAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "V" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetIn", value: "in" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "ORAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "V" },
{ type: "phone", value: "̃" }, // test display for COMBINING TILDE OVERLAY is deceiving
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SHORT_NASAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "V" },
{ type: "phone", value: "ː" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetIn", value: "in" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "NASAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "V" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "phone", value: "ː" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "LONG_NASAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "V" },
{ type: "phone", value: "ː" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetIn", value: "in" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "NASAL_VOWELS" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
],
},
setOperationsJoin: {
latl: `
; ---- set join operations non-mutable!
set SET_C = { SET_A not SET_B }, ; left anti join
SET_D = { SET_A and SET_B }, ; inner join
SET_E = { SET_A or SET_B }, ; full outer join
SET_F = { not SET_A }, ; = { GLOBAL not SET_A }
SET_G = { not SET_A nor SET_B } ; = { GLOBAL not { SET_A or SET_B } }`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{ type: "comment", value: "; ---- set join operations non-mutable! " },
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "SET_C" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetNot", value: "not" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_B" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "comment", value: " ; left anti join" },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_D" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetAnd", value: "and" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_B" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "comment", value: " ; inner join" },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_E" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetOr", value: "or" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_B" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "comment", value: " ; full outer join" },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_F" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetNot", value: "not" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "comment", value: " ; = { GLOBAL not SET_A }" },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_G" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetNot", value: "not" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetNor", value: "nor" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_B" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "whiteSpace", value: " " },
{ type: "comment", value: "; = { GLOBAL not { SET_A or SET_B } }" },
],
},
setOperations: {
latl: `
; ---- set character operations - non-mutable!
set SET_B = { [ Xy ] in SET_A }, ; FILTER: where X is any character and y is a filtering character
SET_C = { SET_A yield [ Xy ] }, ; CONCATENATE: performs transformation with (prepended or) appended character
SET_D = { SET_A yield [ X concat y ] },
SET_E = { SET_A yield [ y concat X ] },
SET_F = { SET_A yield y[ X ] }, ; DISSOCIATE: performs transformation removing prepended (or appended) character
SET_G = { SET_A yield y dissoc [ X ] },
SET_H = { SET_A yield [ X ] dissoc y },
SET_I = { [ Xy ] in SET_A yield [ X ]y } ; combined FILTER and DISSOCIATE`,
tokens: [
{ type: "whiteSpace", value: "\n" },
{
type: "comment",
value: "; ---- set character operations - non-mutable!",
},
{ type: "whiteSpace", value: "\n" },
{ type: "kwSet", value: "set" },
{ type: "whiteSpace", value: " " },
{ type: "setIdentifier", value: "SET_B" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetIn", value: "in" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{
type: "comment",
value:
" ; FILTER: where X is any character and y is a filtering character",
},
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_C" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{
type: "comment",
value:
" ; CONCATENATE: performs transformation with (prepended or) appended character",
},
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_D" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetConcat", value: "concat" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_E" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetConcat", value: "concat" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_F" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "y" },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{
type: "comment",
value:
" ; DISSOCIATE: performs transformation removing prepended (or appended) character",
},
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_G" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetDissoc", value: "dissoc" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_H" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetDissoc", value: "dissoc" },
{ type: "whiteSpace", value: " " },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "comma", value: "," },
{ type: "whiteSpace", value: "\n " },
{ type: "setIdentifier", value: "SET_I" },
{ type: "whiteSpace", value: " " },
{ type: "equal", value: "=" },
{ type: "whiteSpace", value: " " },
{ type: "openCurlyBracket", value: "{" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetIn", value: "in" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "SET_A" },
{ type: "whiteSpace", value: " " },
{ type: "kwSetYield", value: "yield" },
{ type: "whiteSpace", value: " " },
{ type: "openSquareBracket", value: "[" },
{ type: "whiteSpace", value: " " },
{ type: "identifier", value: "X" },
{ type: "whiteSpace", value: " " },
{ type: "closeSquareBracket", value: "]" },
{ type: "phone", value: "y" },
{ type: "whiteSpace", value: " " },
{ type: "closeCurlyBracket", value: "}" },
{ type: "whiteSpace", value: " " },
{ type: "comment", value: "; combined FILTER and DISSOCIATE" },
],
},
};
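
The join semantics annotated in the `setOperationsJoin` fixture above (left anti join, inner join, full outer join, complement) can be mirrored with plain JavaScript `Set`s. This is an illustrative sketch of those semantics only, not the project's actual set implementation; the helper names are invented for the example:

```javascript
// Illustrative sketch of the latl set join semantics described in the
// setOperationsJoin fixture. Helper names are invented for this example.
const not = (a, b) => new Set([...a].filter((x) => !b.has(x))); // left anti join
const and = (a, b) => new Set([...a].filter((x) => b.has(x))); // inner join
const or = (a, b) => new Set([...a, ...b]); // full outer join
const nor = (global, a, b) => not(global, or(a, b)); // { GLOBAL not { A or B } }

const SET_A = new Set(["p", "t", "k"]);
const SET_B = new Set(["t", "k", "q"]);
console.log([...not(SET_A, SET_B)]); // [ "p" ]
console.log([...and(SET_A, SET_B)]); // [ "t", "k" ]
console.log([...or(SET_A, SET_B)]); // [ "p", "t", "k", "q" ]
```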

View file

@@ -1,10 +0,0 @@
import { assertionData } from './assertionData';
import { codeGenerator } from '../codeGenerator';
describe('codeGenerator', () => {
it('parses simple comment', () => {
const { latl, code } = assertionData.simpleComment;
const generatedCode = codeGenerator(latl);
expect(generatedCode).toEqual(code);
});
})

View file

@@ -1,71 +0,0 @@
import { lexer } from '../lexer';
import { assertionData } from './assertionData';
describe('lexer', () => {
const getToken = obj => obj ? formatToken(obj) : null;
const formatToken = obj => ({ type: obj.type, value: obj.value });
const getStream = latl => {
lexer.reset(latl);
let token = getToken(lexer.next());
let stream = [];
do {
stream = [...stream, token]
token = getToken(lexer.next());
} while (token);
return stream;
}
it('lexes simple comment', () => {
const { latl, tokens } = assertionData.simpleComment;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
// it('lexes simple * and identifier', () => {
// lexer.reset('*proto');
// const stream = [ getToken(lexer.next()), getToken(lexer.next()) ];
// expect(stream).toStrictEqual([ { type: 'star', value: '*' }, { type: 'identifier', value: 'proto' } ]);
// })
it('lexes set and identifier', () => {
const { latl, tokens } = assertionData.simpleSetDefinition;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
})
it('lexes multiple set definitions with comma operator', () => {
const { latl, tokens } = assertionData.commaSetDefinition;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
it('lexes set definition with alias', () => {
const { latl, tokens } = assertionData.setAliasDefinition;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
it('lexes set definition with set join', () => {
const { latl, tokens } = assertionData.setDefinitionJoin;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
it('lexes set definition with yield operation', () => {
const { latl, tokens } = assertionData.setDefinitionYield;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
it('lexes all set join operations', () => {
const { latl, tokens } = assertionData.setOperationsJoin;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
});
it('lexes set filter, concat, and dissoc operations', () => {
const { latl, tokens } = assertionData.setOperations;
const stream = getStream(latl);
expect(stream).toStrictEqual(tokens);
})
})
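
The `getStream` helper above drains a moo-style lexer (reset the input, then call `next()` until it returns `undefined`) into an array of tokens. A minimal standalone sketch of that drain pattern, with a stub lexer standing in for the real one:

```javascript
// Stub standing in for a moo-style lexer: reset(input) primes it, next()
// yields one token per call and undefined when exhausted. Not the real lexer.
const makeStubLexer = (tokens) => {
  let i = 0;
  return {
    reset() { i = 0; },
    next() { return tokens[i++]; },
  };
};

// Same drain pattern as getStream in the test file above.
const drain = (lexer, input) => {
  lexer.reset(input);
  const stream = [];
  for (let token = lexer.next(); token; token = lexer.next()) {
    stream.push(token);
  }
  return stream;
};

const stub = makeStubLexer([
  { type: "kwSet", value: "set" },
  { type: "whiteSpace", value: " " },
]);
console.log(drain(stub, "set ").length); // 2
```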

View file

@@ -1,180 +0,0 @@
import { lexer } from "../lexer";
import { parser } from "../parser";
import { assertionData } from "./assertionData";
describe("parser", () => {
it("parses simple comment", () => {
const { latl, AST } = assertionData.simpleComment;
const feedResults = parser().feed(latl).results;
expect(feedResults.length).toBe(1);
expect(feedResults[0]).toStrictEqual(AST);
});
it("parses simple set definition", () => {
const { latl, AST } = assertionData.simpleSetDefinition;
const feedResults = parser().feed(latl).results;
expect(feedResults.length).toBe(1);
expect(feedResults[0]).toStrictEqual(AST);
});
it("parses multiple set definitions with comma operator", () => {
const { latl, AST } = assertionData.commaSetDefinition;
const feedResults = parser().feed(latl).results;
expect(feedResults.length).toBe(1);
expect(feedResults[0]).toStrictEqual(AST);
});
it("lexes set definition with alias", () => {
const { latl, AST } = assertionData.setAliasDefinition;
const feedResults = parser().feed(latl).results;
expect(feedResults[0]).toStrictEqual(AST);
});
it.skip("lexes set definition with set join", () => {
const { latl, AST } = assertionData.setDefinitionJoin;
const feedResults = parser().feed(latl).results;
expect(feedResults[0]).toStrictEqual(AST);
});
it.todo(
"lexes set definition with yield operation"
// , () => {
// const { latl, tokens } = assertionData.setDefinitionYield;
// const stream = getStream(latl);
// expect(stream).toStrictEqual(tokens);
// }
);
it.todo(
"lexes all set join operations"
// , () => {
// const { latl, tokens } = assertionData.setOperationsJoin;
// const stream = getStream(latl);
// expect(stream).toStrictEqual(tokens);
// }
);
it.todo(
"lexes set filter, concat, and dissoc operations"
// , () => {
// const { latl, tokens } = assertionData.setOperations;
// const stream = getStream(latl);
// expect(stream).toStrictEqual(tokens);
// }
);
});
// {
// "set":
// [
// [
// [
// {
// "col": 5,
// "line": 2,
// "lineBreaks": 0,
// "offset": 5,
// "text": "NASAL_PULMONIC_CONSONANTS",
// "toString": [tokenToString],
// "type": "setIdentifier",
// "value": "NASAL_PULMONIC_CONSONANTS",
// },
// null,
// {
// "col": 45,
// "line": 2,
// "lineBreaks": 0,
// "offset": 45,
// "text": "=",
// "toString": [tokenToString],
// "type": "equal",
// "value": "=",
// },
// null,
// [
// [
// {
// "col": 49,
// "line": 2,
// "lineBreaks": 0,
// "offset": 49,
// "text": "m̥",
// "toString": [tokenToString],
// "type": "phone",
// "value": "m̥",
// },
// {
// "col": 91,
// "line": 2,
// "lineBreaks": 0,
// "offset": 91,
// "text": "ɴ",
// "toString": [tokenToString],
// "type": "phone",
// "value": "ɴ",
// },
// ],
// ],
// {
// "col": 94,
// "line": 2,
// "lineBreaks": 0,
// "offset": 94,
// "text": ",",
// "toString": [tokenToString],
// "type": "comma",
// "value": ",",
// },
// null,
// ],
// ],
// - "setIdentifier": "STOP_PULMONIC_CONSONANTS",
// {
// "col": 5,
// "line": 3,
// "lineBreaks": 0,
// "offset": 100,
// "text": "STOP_PULMONIC_CONSONANTS",
// "toString": [tokenToString],
// "type": "setIdentifier",
// "value": "STOP_PULMONIC_CONSONANTS",
// },
// null,
// {
// "col": 45,
// "line": 3,
// "lineBreaks": 0,
// "offset": 140,
// "text": "=",
// "toString": [tokenToString],
// "type": "equal",
// "value": "=",
// },
// null,
// [
// [
// {
// "col": 49,
// "line": 3,
// "lineBreaks": 0,
// "offset": 144,
// "text": "p",
// "toString": [tokenToString],
// "type": "phone",
// "value": "p",
// },
// {
// "col": 104,
// "line": 3,
// "lineBreaks": 0,
// "offset": 199,
// "text": "ʔ",
// "toString": [tokenToString],
// "type": "phone",
// "value": "ʔ",
// },
// ],
// ],
// ],
// "token": "kwSet",
// }
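
The `expect(feedResults.length).toBe(1)` assertions in the parser tests above guard against grammar ambiguity: an Earley-style parser such as nearley returns one entry in `results` per distinct valid parse. A toy illustration of why the length check matters (not the real latl grammar):

```javascript
// Toy "parser": returns every way to segment the input into known words.
// More than one result means the word list is ambiguous for that input,
// which is exactly what the length-1 assertions in the tests rule out.
const allParses = (input, words) => {
  if (input === "") return [[]];
  const parses = [];
  for (const w of words) {
    if (input.startsWith(w)) {
      for (const rest of allParses(input.slice(w.length), words)) {
        parses.push([w, ...rest]);
      }
    }
  }
  return parses;
};

console.log(allParses("ab", ["a", "b"]).length); // 1 — unambiguous
console.log(allParses("ab", ["a", "b", "ab"]).length); // 2 — ambiguous
```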

View file

@@ -0,0 +1,2 @@
body{margin:0;font-family:Catamaran,Arial,Helvetica,sans-serif;background-color:#281734;color:#d5bfbf}body input[type=text],body textarea{background-color:#1d191a;color:#e8e22e;border:1px solid #d5bfbf;font-family:Fira Code,monospace}body code{font-family:Fira Code,monospace}body p.error{color:red}.App{text-align:center}.App-logo{height:40vmin}.App-header{min-height:100vh;display:flex;flex-direction:column;align-items:center;justify-content:center;font-size:calc(10px + 2vmin)}.App-link{color:#09d3ac}div.App{max-height:100vh;max-width:100vw;line-height:1.25em;padding:1em}div.App h1{font-size:2em;padding:1em 0}div.App h3{font-size:1.25em;padding:.5em 0}div.App h5{font-size:1.1em;padding:.1em 0;font-weight:800}div.App div.PhonoChangeApplier{display:grid;width:100%;place-items:center center;grid-template-columns:repeat(auto-fit,minmax(25em,1fr));grid-template-rows:repeat(auto-fill,minmax(300px,1fr))}div.App div.PhonoChangeApplier div{max-width:100%;max-height:85vh;margin:1em;overflow-y:scroll}div.App button.form,div.App input[type=button].form,div.App input[type=submit].form{height:2em;border-radius:.25em;border-color:transparent;margin:.2em auto;width:10em}div.App button.form--add,div.App input[type=button].form--add,div.App input[type=submit].form--add{background-color:#adff2f;color:#000}div.App button.form--remove,div.App input[type=button].form--remove,div.App input[type=submit].form--remove{background-color:red;color:#fff}div.Features ul.Features__list{width:100%}div.Features ul.Features__list li{display:grid;grid-template-columns:10fr 10fr 1fr;margin:.5em 0;place-items:center center}div.Features ul.Features__list li span.feature--names-and-phones{display:grid;grid-template-columns:repeat(auto-fit,minmax(100px,1fr))}div.Features ul.Features__list li span.feature-name{font-weight:600}div.Features form{display:flex;flex-flow:column}div.Features form input{margin:.1em;font-size:1em}div.Features 
button.delete-feature{background-color:red;border-color:transparent;border-radius:.5em;color:#fff;max-height:1.5em}div.Options form{display:grid;grid-template-columns:1fr 1fr;grid-gap:.5em;gap:.5em}div.Output div.Output__container{display:flex;flex-flow:row wrap}div.Output div.Output-epoch{display:flex;flex-flow:column}div.Output div.Output-epoch p.lexicon{display:grid;grid-template-columns:repeat(auto-fit,minmax(5em,1fr))}
/*# sourceMappingURL=main.3576d19b.chunk.css.map */

View file

@@ -0,0 +1 @@
{"version":3,"sources":["index.scss","../public/stylesheets/_variables.scss","App.css","PhonoChangeApplier.scss","Features.scss","Options.scss","Output.scss"],"names":[],"mappings":"AAEA,KACE,QAAS,CACT,gDAAsD,CACtD,wBCJmB,CDKnB,aCJe,CDAjB,oCAOI,wBCLuB,CDMvB,aCPmB,CDQnB,wBCTa,CDUb,+BAAmC,CAVvC,UAcI,+BAAmC,CAdvC,aAkBI,SCfc,CCLlB,KACE,iBACF,CAEA,UACE,aACF,CAEA,YACE,gBAAiB,CACjB,YAAa,CACb,qBAAsB,CACtB,kBAAmB,CACnB,sBAAuB,CACvB,4BACF,CAEA,UACE,aACF,CCnBA,QACE,gBAAiB,CACjB,eAAgB,CAChB,kBAAmB,CACnB,WAAY,CAJd,WAOI,aAAc,CACd,aAAc,CARlB,WAYI,gBAAiB,CACjB,cAAgB,CAbpB,WAiBI,eAAgB,CAChB,cAAgB,CAChB,eAAgB,CAnBpB,+BAuBI,YAAa,CACb,UAAW,CACX,yBAA0B,CAC1B,uDAA0D,CAC1D,sDAAyD,CA3B7D,mCA8BM,cAAe,CACf,eAAgB,CAChB,UAAW,CACX,iBAAkB,CAjCxB,oFAsCI,UAAW,CACX,mBAAqB,CACrB,wBAAyB,CACzB,gBAAkB,CAClB,UAAW,CA1Cf,mGA8CI,wBAA6B,CAC7B,UAAY,CA/ChB,4GAmDI,oBAAqB,CACrB,UAAY,CCpDhB,+BAGI,UAAW,CAHf,kCAMM,YAAa,CACb,mCAAoC,CACpC,aAAe,CACf,yBAA0B,CAThC,iEAYQ,YAAa,CACb,wDAA2D,CAbnE,oDAiBQ,eAAgB,CAjBxB,kBAuBI,YAAa,CACb,gBAAiB,CAxBrB,wBA2BM,WAAa,CACb,aAAc,CA5BpB,mCAiCI,oBAAqB,CACrB,wBAAyB,CACzB,kBAAoB,CACpB,UAAY,CACZ,gBAAiB,CCrCrB,iBAGI,YAAa,CACb,6BAA8B,CAC9B,aAAA,CAAA,QAAU,CCLd,iCAGI,YAAa,CACb,kBAAmB,CAJvB,4BAQI,YAAa,CACb,gBAAiB,CATrB,sCAYM,YAAa,CACb,sDAAyD","file":"main.3576d19b.chunk.css","sourcesContent":["@import '../public/stylesheets/variables';\n\nbody {\n margin: 0;\n font-family: 'Catamaran', Arial, Helvetica, sans-serif;\n background-color: map-get($colors, 'main--bg');\n color: map-get($colors, 'main');\n \n textarea, input[type=\"text\"] {\n background-color: map-get($colors, 'text-input--bg');\n color: map-get($colors, 'text-input');\n border: 1px solid map-get($colors, 'main');\n font-family: 'Fira Code', monospace;\n }\n\n code {\n font-family: 'Fira Code', monospace;\n }\n\n p.error {\n color: map-get($colors, 'error');\n }\n\n}\n","$colors: (\n \"main--bg\": #281734,\n \"main\": #d5bfbf,\n \"text-input\": #e8e22e,\n \"text-input--bg\": #1d191a,\n \"error\": #ff0000\n );",".App {\n 
text-align: center;\n}\n\n.App-logo {\n height: 40vmin;\n}\n\n.App-header {\n min-height: 100vh;\n display: flex;\n flex-direction: column;\n align-items: center;\n justify-content: center;\n font-size: calc(10px + 2vmin);\n}\n\n.App-link {\n color: #09d3ac;\n}\n","div.App {\n max-height: 100vh;\n max-width: 100vw;\n line-height: 1.25em;\n padding: 1em;\n\n h1 {\n font-size: 2em;\n padding: 1em 0;\n }\n\n h3 {\n font-size: 1.25em;\n padding: 0.5em 0;\n }\n\n h5 {\n font-size: 1.1em;\n padding: 0.1em 0;\n font-weight: 800;\n }\n\n div.PhonoChangeApplier {\n display: grid;\n width: 100%;\n place-items: center center;\n grid-template-columns: repeat(auto-fit, minmax(25em, 1fr));\n grid-template-rows: repeat(auto-fill, minmax(300px, 1fr));\n \n div {\n max-width: 100%;\n max-height: 85vh;\n margin: 1em;\n overflow-y: scroll;\n }\n }\n\n button.form, input[type=\"submit\"].form, input[type=\"button\"].form {\n height: 2em;\n border-radius: 0.25em;\n border-color: transparent;\n margin: 0.2em auto;\n width: 10em;\n }\n\n button.form--add, input[type=\"submit\"].form--add, input[type=\"button\"].form--add{\n background-color: greenyellow;\n color: black;\n }\n \n button.form--remove, input[type=\"submit\"].form--remove, input[type=\"button\"].form--remove {\n background-color: red;\n color: white;\n }\n}","div.Features {\n\n ul.Features__list {\n width: 100%;\n \n li {\n display: grid;\n grid-template-columns: 10fr 10fr 1fr;\n margin: 0.5em 0;\n place-items: center center;\n \n span.feature--names-and-phones {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(100px, 1fr));\n }\n \n span.feature-name {\n font-weight: 600;\n }\n }\n }\n\n form {\n display: flex;\n flex-flow: column;\n\n input {\n margin: 0.1em;\n font-size: 1em;\n }\n }\n\n button.delete-feature {\n background-color: red;\n border-color: transparent;\n border-radius: 0.5em;\n color: white;\n max-height: 1.5em;\n }\n}","div.Options {\n\n form {\n display: grid;\n grid-template-columns: 1fr 
1fr;\n gap: 0.5em;\n }\n\n}","div.Output {\n\n div.Output__container {\n display: flex;\n flex-flow: row wrap;\n }\n\n div.Output-epoch {\n display: flex;\n flex-flow: column;\n\n p.lexicon {\n display: grid;\n grid-template-columns: repeat(auto-fit, minmax(5em, 1fr));\n }\n }\n\n}"]}

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1,32 @@
/*
object-assign
(c) Sindre Sorhus
@license MIT
*/
/** @license React v16.12.0
* react.production.min.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
/** @license React v16.12.0
* react-dom.production.min.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
/** @license React v0.18.0
* scheduler.production.min.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1,2 @@
!function(e){function r(r){for(var n,a,l=r[0],p=r[1],f=r[2],c=0,s=[];c<l.length;c++)a=l[c],Object.prototype.hasOwnProperty.call(o,a)&&o[a]&&s.push(o[a][0]),o[a]=0;for(n in p)Object.prototype.hasOwnProperty.call(p,n)&&(e[n]=p[n]);for(i&&i(r);s.length;)s.shift()();return u.push.apply(u,f||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,l=1;l<t.length;l++){var p=t[l];0!==o[p]&&(n=!1)}n&&(u.splice(r--,1),e=a(a.s=t[0]))}return e}var n={},o={1:0},u=[];function a(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,a),t.l=!0,t.exports}a.m=e,a.c=n,a.d=function(e,r,t){a.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},a.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},a.t=function(e,r){if(1&r&&(e=a(e)),8&r)return e;if(4&r&&"object"===typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(a.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)a.d(t,n,function(r){return e[r]}.bind(null,n));return t},a.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return a.d(r,"a",r),r},a.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},a.p="/feature-change-applier/";var l=this["webpackJsonpfeature-change-applier"]=this["webpackJsonpfeature-change-applier"]||[],p=l.push.bind(l);l.push=r,l=l.slice();for(var f=0;f<l.length;f++)r(l[f]);var i=p;t()}([]);
//# sourceMappingURL=runtime-main.7788e262.js.map

File diff suppressed because one or more lines are too long