Author: Fresneda García, Julio
Date accessioned: 2024-06-11
Date available: 2024-06-11
Date issued: 2024-02
Handle: https://hdl.handle.net/20.500.14468/22596

Abstract: With recent advances in large language models, the evolution of speech-to-text tasks has been exponential. While state-of-the-art automatic speech recognition (ASR) models have taken a big step forward in speech transcription, creating quality subtitles still requires human intervention. This project has two main aspects: evaluating cutting-edge ASR models for speech-to-text, and developing a package that uses these ASR models to generate high-quality, compliant subtitles. ASR models do not inherently produce output suitable for subtitles, so one of the primary objectives of this package is to enhance the output generated by ASR models into subtitles that require minimal human modification. Speak2Subs achieves this goal, providing a tool that produces high-quality subtitles with minimal human interaction.

Language: en
License: Attribution-NonCommercial-NoDerivatives 4.0 International (Atribución-NoComercial-SinDerivadas 4.0 Internacional)
Access rights: info:eu-repo/semantics/openAccess
Title: Speak2Subs: Evaluating State-of-the-Art Speech Recognition Models and Compliant Subtitle Generation
Type: Master's thesis
Keywords: ASR; LLM; Speech-To-Text; Subtitle
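The abstract describes turning raw ASR output into compliant subtitles. As a minimal sketch of one step such a pipeline would need, the snippet below renders timestamped segments into the standard SubRip (SRT) format; the segment tuples are a hypothetical example of ASR output, not the actual Speak2Subs API.

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render (start_s, end_s, text) segments as numbered SRT cue blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text.strip()}\n"
        )
    return "\n".join(blocks)

# Hypothetical ASR output: (start_s, end_s, text)
asr_segments = [
    (0.0, 2.4, "Hello and welcome."),
    (2.4, 5.1, "Today we talk about subtitles."),
]
print(segments_to_srt(asr_segments))
```

A real subtitle generator would additionally enforce compliance rules the thesis alludes to, such as maximum characters per line and minimum display duration, before writing the cues.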