[01:44:18] *** Quits: Horwitz (~mich1@p200300ec9f044900022268fffe64e7c4.dip0.t-ipconnect.de) (Ping timeout: 240 seconds)
[01:56:09] *** Joins: Horwitz (~mich1@p200300ec9f29a100022268fffe64e7c4.dip0.t-ipconnect.de)
[01:56:09] *** ChanServ sets mode: +o Horwitz
[21:45:49] I would also prefer an on-device option
[21:45:55] do you know Kaldi ASR?
[21:46:15] was meant for @If
[22:31:47] @ilovekiruna, we looked at it, but it has the same problem as CMUSphinx: you have to build your own speech models, and that takes time and expertise.
[22:32:51] but isn't it the only choice if it should run on the same device?
[22:33:11] I mean, Google STT is also not on the same device
[22:33:31] could be I misunderstand something, so sorry if I am being annoying
[22:39:32] No, you are correct. When we assessed STT options, we found that 100% on-device solutions just did not work effectively. Like you, we would have preferred a local (on-device) solution, but at this time we do not believe they are usable. We also looked at DeepSpeech, but it requires so much horsepower that it is not suitable for mobile devices. One of the reasons we chose Kalliope as our framework is the
[22:39:32] ability to plug and play different services, so if an effective on-device STT comes along we will pop it in.
[23:03:09] I was asking about Kaldi because a friend of mine uses it in a project
[23:03:31] good to know that others also got similar results when doing their evaluation
[23:03:50] is DeepSpeech really too demanding for embedded devices?
[23:04:06] I know there are TensorFlow implementations for lower-power devices
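
The plug-and-play point made at 22:39:32 can be illustrated with a minimal sketch. This is not Kalliope's actual API; the class and function names below are hypothetical, and it only shows the general design choice: hide each STT engine behind a common interface so a cloud backend (e.g. Google STT) could later be swapped for an on-device one (e.g. Kaldi or DeepSpeech) by changing a single configuration value.

# Hypothetical sketch (not Kalliope's real API): a pluggable speech-to-text interface.
from abc import ABC, abstractmethod


class SpeechToText(ABC):
    """Common interface every STT backend implements."""

    @abstractmethod
    def transcribe(self, audio: bytes) -> str:
        """Return the recognized text for a chunk of raw audio."""


class CloudSTT(SpeechToText):
    """Placeholder for a hosted service such as Google STT."""

    def transcribe(self, audio: bytes) -> str:
        # Real code would send `audio` to the provider's API here.
        return "<cloud transcription>"


class OnDeviceSTT(SpeechToText):
    """Placeholder for a local engine such as Kaldi or DeepSpeech."""

    def transcribe(self, audio: bytes) -> str:
        # Real code would run a locally loaded acoustic/language model here.
        return "<on-device transcription>"


def build_stt(engine: str) -> SpeechToText:
    """Pick the backend from configuration, so swapping engines is one setting."""
    backends = {"cloud": CloudSTT, "on_device": OnDeviceSTT}
    return backends[engine]()


if __name__ == "__main__":
    stt = build_stt("cloud")  # change to "on_device" once a viable local engine exists
    print(stt.transcribe(b"\x00\x01"))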