
Cyrillic languages support #51

Closed
mahnunchik opened this issue Feb 19, 2019 · 12 comments

@mahnunchik

Hello,

I've run into the following behaviour.

This example works as expected:

const FlexSearch = require('flexsearch');
const index = new FlexSearch();

index.add(1, 'Foobar')
console.log(index.search('Foobar'));
// [ 1 ]

But this one shows no results.

const FlexSearch = require('flexsearch');
const index = new FlexSearch();

index.add(1, 'Фообар')
console.log(index.search('Фообар'));
// []

I've tested in node and in browser.

@ts-thomas
Contributor

ts-thomas commented Feb 19, 2019

Hello. Try these settings: https://github.com/nextapps-de/flexsearch#cjk-word-break-chinese-japanese-korean

var index = FlexSearch.create({
    encode: false,
    tokenize: function(str){
        return str.replace(/[\x00-\x7F]/g, "").split("");
    }
});
index.add(1, "Фообар");
var results = index.search("Фообар");
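As an aside, here is what that custom tokenizer produces on its own; the helper name is mine, and plain Node is enough to try it (no FlexSearch needed):

```javascript
// Same regex as the tokenizer above: strip ASCII (\x00-\x7F), split per character.
function cjkStyleTokenize(str) {
    return str.replace(/[\x00-\x7F]/g, "").split("");
}

console.log(cjkStyleTokenize("Фообар"));      // ["Ф","о","о","б","а","р"]
console.log(cjkStyleTokenize("abc Фоо 123")); // ["Ф","о","о"] (ASCII is dropped)
```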

If you just want to index whole words (not partials), this may be better:

var index = FlexSearch.create({
    encode: false,
    tokenize: function(str){
        return str.split(/\s+/);
    }
});

@mahnunchik
Author

Hello @ts-thomas

Yep, it works with the following options.

const index = new FlexSearch({
  tokenize: function(str){
    return str.replace(/[\x00-\x7F]/g, "").split("");
  }
});

But why doesn't it work with the forward tokenizer?

@mahnunchik
Author

This one works too:

const index = new FlexSearch({
  tokenize: function(str){
    return str.split("");
  }
});

@ts-thomas
Contributor

The forward tokenizer splits words via a regex, but that regex currently also strips Cyrillic characters. The tokenizer needs to be enhanced so that "forward" also works with non-Latin content.
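For illustration (the library's exact internal regex may differ, but JavaScript's \w only covers [A-Za-z0-9_], so any \W-based split discards Cyrillic entirely):

```javascript
// \W+ treats every Cyrillic letter as a non-word character,
// so splitting on it leaves nothing to index.
console.log("Foobar".split(/\W+/)); // ["Foobar"]
console.log("Фообар".split(/\W+/)); // ["", ""]
```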

@mahnunchik
Author

It would be really helpful 😉

@ts-thomas
Contributor

ts-thomas commented Feb 21, 2019

@mahnunchik The latest version, v0.6.1, adds a new split option when creating an index. This makes it possible to use a built-in tokenizer like "strict" or "forward" (which the contextual index requires) while handling word splitting separately.

var index = FlexSearch.create({
    encode: false,
    split: /\s+/,
    tokenize: "reverse"
});
index.add(0, "Фообар");
var results = index.search("Фообар");
var results = index.search("бар");
var results = index.search("Фоо");
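To make the behaviour above concrete, here is a purely conceptual sketch of what a "reverse" tokenizer indexes (this is not FlexSearch's actual implementation, just an illustration of why both prefix and suffix searches match):

```javascript
// A "reverse" tokenizer indexes partials from both ends of a word,
// which is why both "Фоо" (prefix) and "бар" (suffix) match "Фообар".
// Conceptual sketch only - not FlexSearch's real internals.
function reversePartials(word) {
    const parts = new Set();
    for (let i = 1; i <= word.length; i++) {
        parts.add(word.slice(0, i)); // prefixes: "Ф", "Фо", "Фоо", ...
        parts.add(word.slice(-i));   // suffixes: "р", "ар", "бар", ...
    }
    return parts;
}

const parts = reversePartials("Фообар");
console.log(parts.has("Фоо")); // true
console.log(parts.has("бар")); // true
```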

@mahnunchik
Author

@ts-thomas thank you! That helps with my task.

Maybe it would make sense to use a more general split by default, to cover all languages that use a space as the word separator?

With the default split option /\W+/, it will be hard to track down the root of the problem for a string like foobar αβγ.
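The mismatch is easy to reproduce in plain JavaScript; a Unicode-aware split (assuming an engine with \p{...} property escapes, i.e. modern Node or browsers) keeps both words:

```javascript
// Default-style split: Greek letters count as \W and are silently dropped.
console.log("foobar αβγ".split(/\W+/));             // ["foobar", ""]

// Unicode-aware split: keep runs of letters/digits from any script.
console.log("foobar αβγ".split(/[^\p{L}\p{N}]+/u)); // ["foobar", "αβγ"]
```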

@ts-thomas
Contributor

The next major release will improve the handling of all language-specific features, including word splitting.

@tareefdev

The solutions mentioned here helped me get FlexSearch to index Arabic text. Thanks, guys.

@ts-thomas
Contributor

Also consider using the "rtl" option for right-to-left support:

var index = FlexSearch.create({
    encode: false,
    rtl: true,
    split: /\s+/,
    tokenize: "forward"
});

@gmfmi

gmfmi commented May 18, 2020

Hi @ts-thomas, last year you said:

The next main release will get an improvement of handling all language-specific features.

As FlexSearch is currently at the 0.7.0 release candidate, I was wondering if there are any new features for language processing.

I recently created a project called SearchinGhost, an in-browser search plugin for Ghost CMS powered by FlexSearch. I am really happy with FlexSearch (thank you for the work done!) but I would like to add multi-language capabilities. For now, based on what I read in the issues, I came up with these default options per language:

Latin:

FlexSearch.create({
    encode: "simple",
    tokenize: "forward"
});

Arabic:

FlexSearch.create({
    encode: false,
    rtl: true,
    split: /\s+/,
    tokenize: "forward"
});

Cyrillic, Indic (any language where words are separated by spaces):

FlexSearch.create({
    encode: false,
    split: /\s+/,
    tokenize: "forward"
});

Chinese, Japanese or Korean (dedicated characters, no spaces):

FlexSearch.create({
    encode: false,
    tokenize: function(str){
        return str.replace(/[\x00-\x7F]/g, "").split("");
    }
});
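One small caveat with the CJK variant: split("") works on UTF-16 code units, so characters outside the Basic Multilingual Plane (some rarer CJK ideographs, emoji) get cut into broken surrogate halves. Spreading the string iterates by code point instead; a possible tweak, assuming that matters for your content:

```javascript
// split("") cuts astral characters such as "𠮷" into two surrogate halves.
console.log("日本𠮷".split("").length); // 4 ("𠮷" became two halves)
// The string iterator is code-point aware:
console.log([..."日本𠮷"].length);      // 3

// Code-point-safe variant of the CJK tokenizer above:
function tokenizeCJK(str) {
    return [...str.replace(/[\x00-\x7F]/g, "")];
}
console.log(tokenizeCJK("abc 日本𠮷")); // ["日", "本", "𠮷"]
```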

Do you think there is any possible improvement/optimisation?

EDIT: I finally found the relevant documentation about v0.7.0 - https://github.com/nextapps-de/flexsearch/blob/0.7.0/doc/0.7.0.md. Hope this version will be out one day :)

@EvanPartidas

For some reason, the solution to this in TypeScript seems to be different. I'm just posting how I've gotten it to work in the hope that it will help others.

Note: I'm using "flexsearch": "0.7.11" because without that I was facing this issue while using TypeScript.

I was not able to follow either the FlexSearch.create(< options >) or the new FlexSearch(< options >) patterns I saw others using here due to compile errors. The split field doesn't exist on the options type for new Index(< options >), so that also caused compile errors. I tried specifying the charset as "cyrillic" and "cyrillic:default", but that wasn't working either.

I found that setting encode to a function of your choice is all you need to do.

It seems the encode function is the only one that actually matters. I looked at the source and experimented a bit, and it appears that once you specify an encoder, the tokenize, stemmer, and charset options have no effect.

Here's my working code:

import { Index } from "flexsearch";
import { PorterStemmerRu, WordTokenizer } from "natural";

const tokenizer = new WordTokenizer();

const index_ru = new Index({
    encode: (str) => {
        let ret = tokenizer.tokenize(str);
        for (let i = 0; i < ret.length; i++) {
            ret[i] = PorterStemmerRu.stem(ret[i]);
        }
        return ret;
    },
});

I'm using the natural library to handle the stemming.
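If you'd rather avoid the extra dependency, a minimal encode function can be written with a Unicode-aware regex alone. Note this does no stemming, so matching is less forgiving than the stemmed version above; the function name is illustrative:

```javascript
// Lowercase, then keep runs of letters/digits from any script.
// No stemming: "страны" will not match "страна" with this encoder.
function encodeRu(str) {
    return str.toLowerCase().match(/[\p{L}\p{N}]+/gu) || [];
}

console.log(encodeRu("Привет, мир!"));  // ["привет", "мир"]
console.log(encodeRu("Foobar Фообар")); // ["foobar", "фообар"]
```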

I'm new to this library so please correct any horrible mistakes I've made!
