outer_spec@lemmy.blahaj.zone to 196@lemmy.blahaj.zone · 2 years ago — "Ruletanic" (image post, 7 comments)
Norah (pup/it/she)@lemmy.blahaj.zone · 2 years ago: Hope you like 40 second response times unless you use a GPU model.
JDubbleu@programming.dev · 2 years ago: I've hosted one on a Raspberry Pi and it took at most a second to process and act on commands. Basic speech to text doesn't require massive models and has become much less compute intensive in the past decade.
Norah (pup/it/she)@lemmy.blahaj.zone · 2 years ago: Okay, well, I was running faster-whisper through Home Assistant.