[Work in Progress]
Go module providing classic (rule-based) and dictionary-backed transliterators for Avro Phonetic.
To my knowledge, this is also the fastest dictionary-based suggestion generator, primarily because it walks a Trie instead of scanning the dictionary for regular-expression matches.
Comparing apples to oranges (because why not), it is roughly 100 times faster than the previous JavaScript, regular-expression-based suggestion generator (tested in a Node.js environment).
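As a rough illustration of why a Trie helps here (a simplified sketch only; the node layout and method names below are assumptions, not the module's actual internals), suggestions are found by walking the Trie once, so lookup cost depends on the length of the input rather than the size of the dictionary:

package main

import "fmt"

// trieNode is a hypothetical node type used only for this sketch.
type trieNode struct {
	children map[rune]*trieNode
	words    []string // dictionary words reachable at this node
}

func newTrieNode() *trieNode {
	return &trieNode{children: map[rune]*trieNode{}}
}

// insert registers a word under a key (e.g. its romanized spelling).
func (t *trieNode) insert(key, word string) {
	node := t
	for _, r := range key {
		next, ok := node.children[r]
		if !ok {
			next = newTrieNode()
			node.children[r] = next
		}
		node = next
	}
	node.words = append(node.words, word)
}

// lookup walks the Trie along the key; the cost is O(len(key)),
// independent of how many words the dictionary holds.
func (t *trieNode) lookup(key string) []string {
	node := t
	for _, r := range key {
		next, ok := node.children[r]
		if !ok {
			return nil
		}
		node = next
	}
	return node.words
}

func main() {
	root := newTrieNode()
	root.insert("bangla", "বাংলা")
	root.insert("bangla", "বাঙলা")
	fmt.Println(root.lookup("bangla")) // [বাংলা বাঙলা]
}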
This module is intended to be used as a library.
However, there is a demo CLI for quickly checking the output. Run the following command:
go run ./cmd/avrophoneticdemo shadhinota
go get -u github.com/mugli/libavrophonetic
package main

import (
	"fmt"

	"github.com/mugli/libavrophonetic/databasedconv"
	"github.com/mugli/libavrophonetic/rulebasedconv"
)

func main() {
	input := "bangla"

	rulebasedConverter := rulebasedconv.NewConverter()
	databasedConverter, _ := databasedconv.NewConverter() // error ignored for brevity; see the note below

	rulebasedOutput := rulebasedConverter.ConvertWord(input)
	databasedOutput := databasedConverter.ConvertWord(input)

	fmt.Printf("(Rulebased conversion) %s = %s \n", input, rulebasedOutput)  // বাংলা
	fmt.Printf("(Databased conversion) %s = %v \n", input, databasedOutput) // [বাংলা বাঙলা]
}
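In real code you would check the error returned by databasedconv.NewConverter instead of discarding it. A minimal sketch (log.Fatalf is used here only for illustration):

package main

import (
	"fmt"
	"log"

	"github.com/mugli/libavrophonetic/databasedconv"
)

func main() {
	databasedConverter, err := databasedconv.NewConverter()
	if err != nil {
		// The dictionary-backed converter could not be initialized; fail loudly
		// instead of silently ignoring the error.
		log.Fatalf("initializing databased converter: %v", err)
	}

	fmt.Println(databasedConverter.ConvertWord("bangla")) // [বাংলা বাঙলা]
}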
https://pkg.go.dev/github.com/mugli/libavrophonetic
To run tests/see coverage, run the following commands:
make test
make test-cover
Instead of plain-text data files, this module uses gob-encoded files for faster data loading (that is, Trie generation).
The gob files are embedded into the binary at compile time using the embed package introduced in Go 1.16.
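For context, a minimal sketch of what embedding and decoding such a gob file can look like (the file name dictionary.gob, the LoadDictionary function, and the map data shape are assumptions for illustration; the module's actual code may differ):

package data

import (
	"bytes"
	_ "embed" // enables the //go:embed directive below
	"encoding/gob"
)

// dictionaryGob holds the embedded gob data. The file name is a
// hypothetical example, not necessarily the module's real layout.
//
//go:embed dictionary.gob
var dictionaryGob []byte

// LoadDictionary decodes the embedded gob bytes into an in-memory map.
// The map type is illustrative; the real data shape may differ.
func LoadDictionary() (map[string][]string, error) {
	var dict map[string][]string
	if err := gob.NewDecoder(bytes.NewReader(dictionaryGob)).Decode(&dict); err != nil {
		return nil, err
	}
	return dict, nil
}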
If you change the source files (the ones with the source- prefix in their filenames) in the ./data directory, run the following command to regenerate the binary data files:
make generate-data
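Generating such a gob file is essentially the reverse of the decoding sketch above. A hypothetical encoder might look like this (the output path and data shape are assumptions; the actual make generate-data target may work differently):

package main

import (
	"encoding/gob"
	"log"
	"os"
)

func main() {
	// Illustrative data shape; the real source files and structures may differ.
	dict := map[string][]string{
		"bangla": {"বাংলা", "বাঙলা"},
	}

	f, err := os.Create("dictionary.gob") // hypothetical output path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := gob.NewEncoder(f).Encode(dict); err != nil {
		log.Fatal(err)
	}
}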
Git blame does not show Tahmid's name in this repo because it was started from scratch, but both @tahmidsadik and I had a lot of fun building the initial prototype for this.