20240115: How to recognize Russian subtitles online? 2024/1/15 21:25
Baidu search: Russian, audio, online recognition, subtitles. Bilibili: Russian AI subtitle recognition
Audio/Video-to-Text Subtitle Tool V1.2
BING: Audio/Video-to-Text Subtitle Tool V1.2 https://www.bilibili.com/video/BV1d34y1F7qA https://www.bilibili.com/video/BV1d34y1F7qA/?p=4&vd_source=4a6b675fa22dfa306da59f67b1f22616 Audio/video-to-text subtitle tool V1.2, now with the whisper-large-V3 model: automatic translation across 100+ languages, unzip and use.
万能君的软件库 (Wannengjun's Software Library): I mainly share interesting original tools I've made; the tools aim to be unzip-and-use, and I hope they help you.
Download link for the unzip-and-use audio/video-to-text subtitle tool: follow the channel and send me the private message "字幕" (subtitles) to get it. The software wasn't easy to make; no need for a "triple-like", a free thumbs-up is enough.
Audio/Video-to-Text Subtitle Tool V1.2 download (Win10, Win11): 1. Quark drive link: https://pan.quark.cn/s/82b36b6adfa7 extraction code: JsyQ 2. Baidu drive link: https://pan.baidu.com/s/1UOV0orx6GhgMfoyETcNe0g?pwd=9p2x
Development isn't easy; if you are able to, you can click the tip button inside the software to leave a tip O(∩_∩)O

https://github.com/openai/whisper
Whisper [Blog] [Paper] [Model card] [Colab example]
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.
Approach
A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
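A minimal sketch of what this looks like in practice, using whisper's tokenizer module (the Russian configuration here is an illustrative choice to match the topic of this post, not an example from the README):

import whisper.tokenizer

# Build the multilingual tokenizer configured for Russian transcription and
# inspect the special-token prefix the decoder is conditioned on; it encodes
# <|startoftranscript|><|ru|><|transcribe|>.
tok = whisper.tokenizer.get_tokenizer(multilingual=True, language="ru", task="transcribe")
print(tok.sot_sequence)  # tuple of the three special-token ids

Swapping task="transcribe" for task="translate", or changing the language code, changes only this prefix; the model weights stay the same, which is what lets one model replace a whole pipeline.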
Setup
We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably OpenAI's tiktoken for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with the following command:
pip install -U openai-whisper

Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:

pip install git+https://github.com/openai/whisper.git

To update the package to the latest version of this repository, please run:

pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

It also requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg

You may need rust installed as well, in case tiktoken does not provide a pre-built wheel for your platform. If you see installation errors during the pip install command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g. export PATH="$HOME/.cargo/bin:$PATH". If the installation fails with "No module named 'setuptools_rust'", you need to install setuptools_rust, e.g. by running:
pip install setuptools-rust

Available models and languages
There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and inference speed relative to the large model; actual speed may vary depending on many factors including the available hardware.

Size     Parameters   English-only model   Multilingual model   Required VRAM   Relative speed
tiny     39 M         tiny.en              tiny                 ~1 GB           ~32x
base     74 M         base.en              base                 ~1 GB           ~16x
small    244 M        small.en             small                ~2 GB           ~6x
medium   769 M        medium.en            medium               ~5 GB           ~2x
large    1550 M       N/A                  large                ~10 GB          1x

The .en models for English-only applications tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models.
Whisper's performance varies widely depending on the language. The figure below shows a performance breakdown of the large-v3 and large-v2 models by language, using WERs (word error rates) or CERs (character error rates, shown in italic) evaluated on the Common Voice 15 and Fleurs datasets. Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
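For reference, WER is the word-level edit distance normalized by reference length: WER = (S + D + I) / N, where S, D, and I count substituted, deleted, and inserted words relative to a reference transcript of N words; CER applies the same formula over characters, which suits languages without clear word boundaries.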
[Figure: WER breakdown by language]
Command-line usage
The following command will transcribe speech in audio files, using the medium model:

whisper audio.flac audio.mp3 audio.wav --model medium

The default setting (which selects the small model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:

whisper japanese.wav --language Japanese

Adding --task translate will translate the speech into English:

whisper japanese.wav --language Japanese --task translate

Run the following to view all available options:

whisper --help

See tokenizer.py for the list of all available languages.
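Since this post is about Russian subtitles, here is a sketch of the same CLI applied to a Russian recording (the file name is hypothetical; --output_format srt asks Whisper to write an .srt subtitle file next to the input):

whisper russian.wav --language Russian --model large-v3 --output_format srt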
Python usage
Transcription can also be performed within Python:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Internally, the transcribe() method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
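Building on this, a minimal sketch for the Russian-subtitle use case this post is about: transcribe a Russian recording and write the segments out as an .srt file ("russian_audio.mp3" and the helper fmt() are hypothetical names, not part of the Whisper API):

import whisper

model = whisper.load_model("large-v3")
# passing language="ru" skips language detection; task defaults to "transcribe"
result = model.transcribe("russian_audio.mp3", language="ru")

def fmt(t: float) -> str:
    # format seconds as an SRT timestamp, e.g. 00:01:02,500
    h, rem = divmod(int(t * 1000), 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

with open("russian_audio.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n\n")

Each segment produced by transcribe() carries start and end times in seconds plus the recognized text, which is all an SRT file needs.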
Below is an example usage of whisper.detect_language() and whisper.decode(), which provide lower-level access to the model.

import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)

More examples
Please use the Show and tell category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.
License
Whisper's code and model weights are released under the MIT License. See LICENSE for further details.

Baidu search: whisper ubuntu
https://blog.csdn.net/huiguo_/article/details/133382558 Using whisper and funASR on Ubuntu: speaker separation and diarization
https://blog.csdn.net/yangyi139926/article/details/135110390 Installing the whisper and whisper-ctranslate2 speech-recognition tools on Ubuntu 16.04: a pitfall-filling guide
https://zhuanlan.zhihu.com/p/664661510 Setting up OpenAI Whisper for speech-to-text on the ARM-based 图为智盒 T906G (Ubuntu 20.04)
https://www.ncnynl.com/archives/202310/6051.html ROS2 voice-interaction tutorial: using whisper to publish a speech-to-text topic under ROS2

References
https://www.bilibili.com/video/BV14C4y1F7YM https://www.bilibili.com/video/BV14C4y1F7YM/?spm_id_from=333.337.search-card.all.click&vd_source=4a6b675fa22dfa306da59f67b1f22616 Audio/video-to-subtitle conversion: supports recognition and translation of 100+ languages, works offline
This audio/video-to-subtitle tool supports recognition and translation of 100+ languages (the recognized/translated languages include English, Japanese, Korean, German, Russian, and more) and runs fully offline. It is built on faster-whisper, a derivative project of OpenAI's whisper; it is simple to operate, and after conversion the output directory contains subtitle text in both SRT and TXT formats.
https://www.bilibili.com/video/BV1WR4y1e7Fh/?spm_id_from=333.337.search-card.all.click&vd_source=4a6b675fa22dfa306da59f67b1f22616 沙拉俄语 (Salad Russian) subtitle plugin: how to use it on phone and PC for Russian audio recognition
https://www.bilibili.com/read/cv17827622/ Russian learning: Russian audio/video to text, VLC player, Subtitle Expert [paid]
https://gglot.com/zh/russian-subtitles/ Russian subtitles: accurate Russian subtitles generated easily online [the "free" tool charges extra]
https://www.98dw.com/102.html https://www.bilibili.com/read/cv28458016/?jump_opus=1 Audio/Video-to-Subtitle Tool V1.2: supports 100+ languages, a translation gem
Built on faster-whisper, a derivative project of OpenAI's whisper; it supports recognition and translation of 100+ languages, and the software runs fully offline (a minimal sketch of this approach follows the list below).
1. The software's interface is simple, and the steps are explained clearly.
2. After conversion, the output directory contains both SRT subtitle and TXT plain-text formats.
3. Screenshots of subtitle results from testing speech translation on several videos; the recognized/translated languages included Japanese, English, Korean, Russian, German, and more.
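A minimal sketch of that faster-whisper approach, assuming the faster-whisper package is installed (pip install faster-whisper); this is an illustration, not the tool's actual source code, and the file name is hypothetical:

from faster_whisper import WhisperModel

# load a model; int8 keeps memory usage low on CPU-only machines
model = WhisperModel("large-v3", device="auto", compute_type="int8")

# task="translate" turns Russian (or any detected language) speech into English text
segments, info = model.transcribe("russian_audio.mp3", task="translate")
print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")

The segments come back as a lazy generator with start/end times and text, so a GUI wrapper like the tool above only needs to iterate over them and format the output as SRT or TXT.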