cross-posted from: https://lemm.ee/post/61282397
Open-sourcing this project I made in just a weekend. I'm planning to continue it in my free time, with synthetic data generation and some more modifications; anyone is welcome to chip in, as I'm not an expert in ML. The inference is live here using tensorflow.js, and the model is just 1.92 MB!
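For anyone curious how the in-browser part fits together, here's a rough sketch of what tfjs inference like this can look like. The model URL, the 32x32 input size, and the class list are placeholders for illustration, not the project's actual values:

```ts
import * as tf from '@tensorflow/tfjs';

// Assumed values: the real model.json path, input resolution, and label
// list depend on how the project exports its model.
const MODEL_URL = '/model/model.json';
const CLASSES = ['क', 'ख', 'ग' /* ...one label per output unit */];

let model: tf.LayersModel | undefined;

async function classify(canvas: HTMLCanvasElement): Promise<string> {
  const m = (model ??= await tf.loadLayersModel(MODEL_URL)); // load once, reuse
  // tf.tidy disposes intermediate tensors so repeated calls don't leak memory.
  const predicted = tf.tidy(() => {
    const input = tf.browser
      .fromPixels(canvas, 1)      // read the drawing as a 1-channel image
      .resizeBilinear([32, 32])   // assumed model input resolution
      .toFloat()
      .div(255)                   // scale pixel values to [0, 1]
      .expandDims(0);             // add a batch dimension: [1, 32, 32, 1]
    return (m.predict(input) as tf.Tensor).argMax(-1);
  });
  const idx = (await predicted.data())[0];
  predicted.dispose();
  return CLASSES[idx];
}
```

A model this small loads fast enough that keeping it cached in a module-level variable, as above, is usually all the optimization the demo page needs.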
Great effort! What do you propose to do about the joint letters (conjuncts) that are a peculiarity of Devanagari?
Thanks a lot! It's not only the joint letters: the diacritics are also so diverse, and it's a shame that we don't have any dataset covering this language and its diacritic combinations. Honestly, the possibilities are practically infinite, and I don't know how we can generalize a model for this. It is surely possible, but I'm not that experienced in ML, so I'd really like to get ideas on this. As for datasets, I think I'm going to do something about a diacritics-included dataset in the future. I have plans but not the time to execute them fully, and the response and impact so far have been very small.
I can imagine the challenges that you describe. It is only through efforts like yours that people will feel encouraged to produce better training datasets. I came across a dataset that has words with diacritics (though I'm not sure "diacritics" is the right term, since they are not accent marks) and seems to be different from the dataset you are using: https://cvit-iiit-ac-in.translate.goog/research/projects/cvit-projects/indic-hw-data?_x_tr_sl=en&_x_tr_tl=hi&_x_tr_hl=hi&_x_tr_pto=tc
I can read and write Hindi/Devanagari well and am willing to help in any way possible toward any incremental progress in this domain.
I've worked on this topic a lot: I built a first version last year, and this year's update is what you see above. Also, I just pushed a major update to the website with a cool new feature: https://dcda-v2.vercel.app/ (please check it out again!)

The thing is, I don't really have the motivation to work on this, because it requires a large community effort to gather a meaningful amount of data. And from an ML perspective, is it worth the effort? You'd have to take on the complexity of the Hindi script itself: suppose I train the model to include the maatras, would the model still be able to identify two characters written side by side, conjoined by the headline, together with the maatras? If someone convinces me that this kind of dataset would have very real value for the digitization of the language and its ecosystem, and that it would prove extremely useful for future researchers, then sure, I'm down to work on it.

The implementation I'm thinking of is really easy to build, and we would not have to sit for hours writing samples on our own. We could distribute the task to the crowd: my idea for data collection is to get people in person to write a few letters on a piece of paper, then use computer vision to crop the characters out of the marked rectangles. I'm dumbing down the explanation, but yes, it would require CV and markers (there's a rough sketch of the cropping step at the end of this comment). I could even collect data from the web app itself, but not many people would chip in. I'm not exceptionally famous, and I don't have a huge following that could get me thousands of samples in a few days, weeks, or months. With the network I have, it would take years to gather a meaningful variety of data, and that's just for the base characters without maatras.
Sorry for the long rant, but yeah, I'm really not motivated to work on this right now, even though I do have the idea and a plan. I'd love to hand the torch to a newcomer or an ML enthusiast, or to someone who's more into it than I am at the moment.
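For the curious, here's a minimal sketch of the cropping step I mean. It assumes the scanned sheet has already been located and deskewed using the markers (that detection step is omitted here), so each cell sits at a known offset; the grid geometry below is made up for illustration:

```ts
// Hypothetical sheet layout: a ROWS x COLS grid of marked rectangles,
// starting MARGIN px from the top-left corner, each CELL px square.
const ROWS = 10;
const COLS = 8;
const MARGIN = 50;
const CELL = 120;

// Slice an aligned scan of the sheet into one canvas per handwritten sample.
function cropCells(sheet: HTMLImageElement): HTMLCanvasElement[] {
  const cells: HTMLCanvasElement[] = [];
  for (let r = 0; r < ROWS; r++) {
    for (let c = 0; c < COLS; c++) {
      const out = document.createElement('canvas');
      out.width = out.height = CELL;
      const ctx = out.getContext('2d')!;
      // Copy one grid cell from the sheet into its own canvas.
      ctx.drawImage(
        sheet,
        MARGIN + c * CELL, MARGIN + r * CELL, CELL, CELL, // source rect
        0, 0, CELL, CELL,                                 // destination rect
      );
      cells.push(out);
    }
  }
  return cells;
}
```

The hard part is the deskewing before this step, but once the corners are found from the markers, the cropping itself is just fixed-offset slicing like this, so one scanned sheet yields dozens of labeled samples at once.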
I get all your points, and I think they are the reason this has not been solved yet. But at times like this, I take inspiration from the story of the first version of reCAPTCHA (the CMU project later acquired by Google). The simplicity of using two words, one known and one unknown, to get practically every printed word ever transcribed is nothing short of awe-inspiring. If the Indian government were to put words in regional languages into an Indian version of such a CAPTCHA, just to book tickets on Indian Railways, then the entirety of regional-language text could be transcribed before we know it, besides yielding valuable training datasets for ML/DL models.
Nonetheless, I wish you the very best in your endeavours.