this post was submitted on 09 Jan 2025
476 points (99.2% liked)

Opensource


A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!

[–] moosetwin@lemmy.dbzer0.com 13 points 3 days ago (2 children)

I don't mind the idea, but I would be curious where the training data comes from. You can't just train it on users' (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn't clarify anything.
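The pairing described above is the core of supervised speech-to-text training data: each subtitle cue ties a stretch of audio to the text it should transcribe. As a minimal sketch (not anything the project is confirmed to do), here is how SRT subtitles could be parsed into those timed training pairs:

```python
import re

def parse_srt(srt_text):
    """Parse SRT subtitle text into (start_seconds, end_seconds, text) tuples.

    Each tuple says which stretch of audio should transcribe to which
    string -- exactly the supervision signal an audio-to-text model needs.
    """
    def to_seconds(ts):
        h, m, rest = ts.split(":")
        s, ms = rest.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

    pairs = []
    # Each SRT cue is: index line, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text lines.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start, end = (to_seconds(t) for t in lines[1].split(" --> "))
        pairs.append((start, end, " ".join(lines[2:])))
    return pairs

sample = """\
1
00:00:01,000 --> 00:00:03,500
Hello there.

2
00:00:04,000 --> 00:00:06,250
General Kenobi!
"""

for start, end, text in parse_srt(sample):
    print(f"{start:.3f}-{end:.3f}: {text}")
```

A real pipeline would then slice the audio at those timestamps and feed (clip, text) pairs to the model; without existing subtitles there is nothing to slice against, which is the problem the comment raises.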

[–] leftytighty@slrpnk.net 17 points 3 days ago (1 children)

Subtitles aren't a unique dataset; it's just audio-to-text.

[–] nova_ad_vitum@lemmy.ca 12 points 3 days ago (1 children)

They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.

[–] MDCCCLV@lemmy.ca 3 points 3 days ago (1 children)

No, if you have a center track you can just use that. Volume isn't a problem for a computer listening to it, since it isn't listening through physical speakers.
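For context on the center-track point: in the standard 5.1 channel order (FL, FR, FC, LFE, BL, BR), dialogue is conventionally mixed into the front-center (FC) channel, so a transcription pipeline can read that channel alone and skip the music and effects that bury dialogue in a stereo downmix. A toy sketch with interleaved samples (illustrative values only):

```python
# Standard 5.1 channel order: FL, FR, FC, LFE, BL, BR.
CHANNELS = ["FL", "FR", "FC", "LFE", "BL", "BR"]
FC = CHANNELS.index("FC")  # index 2

def extract_center(interleaved, n_channels=6):
    """Pull the center channel out of interleaved multichannel samples."""
    return interleaved[FC::n_channels]

# Two toy frames of 5.1 audio; the loud 0.9 / 0.8 values stand in for dialogue.
samples = [0.1, 0.2, 0.9, 0.0, 0.05, 0.05,   # frame 0
           0.1, 0.2, 0.8, 0.0, 0.05, 0.05]   # frame 1
print(extract_center(samples))  # dialogue-only track
```

With real files, the same idea is usually done with ffmpeg's `pan` filter, something like `ffmpeg -i in.mkv -af "pan=mono|c0=FC" center.wav` (exact mapping depends on the source's channel layout).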

[–] leftytighty@slrpnk.net 1 points 2 days ago

I took the other comment as a joke but this is accurate and interesting additional information!

[–] Warl0k3@lemmy.world 8 points 3 days ago

I hope they're using OpenSubtitles, or one of the many academic speech-to-text datasets that exist.