A step-by-step record of ChatTTS. First, just get it running.

0. Download the Git tools

Git - Downloads: https://git-scm.com/downloads

TortoiseGit (Windows Shell Interface to Git): https://tortoisegit.org/download/

1. Install them however you like; the Chinese localization is optional, it doesn't matter.

2. Create a working directory. Mine is I:\chat_kimi; put yours wherever you like.

3. Open the official repository:

https://github.com/2noise/ChatTTS

The clone URL is:

https://github.com/2noise/ChatTTS.git
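
The clone step itself isn't shown in the original notes; as a minimal sketch (run from I:\chat_kimi, assuming Git is already on PATH):

git clone https://github.com/2noise/ChatTTS.git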

 

4. Assuming you already have conda installed, create and activate an environment:

conda create --name chat_kimi python=3.11
conda activate chat_kimi

That's all it takes (see the screenshot). Alternatively, you can create the environment through the GUI.

5. Install the required packages (the commands below are for reference only):

(base) PS C:\Users\dell> conda activate chat_kimi
(chat_kimi) PS C:\Users\dell> cd I:\chat_kimi
(chat_kimi) PS I:\chat_kimi> cd chattts

Use a domestic (mainland China) PyPI mirror:

pip install -r requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
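
If you would rather make the Tsinghua mirror the default instead of passing -i every time, pip can persist it (a minimal sketch, using the same mirror URL as above):

pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple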

 

6. Get the models from a domestic Hugging Face mirror. Somewhere in the code there is a place to change the automatic download URL; I'll update this section once I find it.

HF-Mirror: https://hf-mirror.com/ . The repository layout is shown in the screenshots.

Create a new folder named models, or put the files directly under the ChatTTS directory (as in the second screenshot). The advantage of the second layout is that none of the code needs to be modified; it just works.

 

Download all of the files from the site into the corresponding locations.

After downloading, note that some files end up with the path prepended to their base filename; be sure to rename them back.

Download everything to the right place; it takes roughly 20 minutes.
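
As an alternative to downloading by hand: if the automatic download in ChatTTS goes through huggingface_hub (an assumption, not verified against the ChatTTS source), it should respect the HF_ENDPOINT environment variable that hf-mirror.com recommends setting, e.g. in PowerShell before launching:

$env:HF_ENDPOINT = "https://hf-mirror.com"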

7. Adjust the model path. This is only needed if you used the layout from the first screenshot.

Pay attention to this location (see screenshot).

Now run it:

(chat_kimi) PS I:\chat_kimi\ChatTTS> python examples\web\web.py
C:\Users\dell\.conda\envs\chat_kimi\python.exe: can't open file 'I:\\chat_kimi\\ChatTTS\\examples\\web\\web.py': [Errno 2] No such file or directory
(chat_kimi) PS I:\chat_kimi\ChatTTS> python examples\web\webui.py
[+0800 20241013 17:03:24] [WARN]  WebUI  | funcs | no ffmpeg installed, use wav file output
[+0800 20241013 17:03:24] [INFO]  WebUI  | webui | loading ChatTTS model...
[+0800 20241013 17:03:24] [INFO] ChatTTS | dl | checking assets...
[+0800 20241013 17:03:25] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20241013 17:03:25] [WARN] ChatTTS | gpu | no GPU found, use CPU instead
[+0800 20241013 17:03:25] [INFO] ChatTTS | core | use device cpu
[+0800 20241013 17:03:25] [INFO] ChatTTS | core | vocos loaded.
[+0800 20241013 17:03:25] [INFO] ChatTTS | core | dvae loaded.
[+0800 20241013 17:03:26] [INFO] ChatTTS | core | embed loaded.
[+0800 20241013 17:03:26] [INFO] ChatTTS | core | gpt loaded.
[+0800 20241013 17:03:26] [INFO] ChatTTS | core | speaker loaded.
[+0800 20241013 17:03:26] [INFO] ChatTTS | core | decoder loaded.
[+0800 20241013 17:03:26] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20241013 17:03:26] [WARN]  WebUI  | funcs | Package nemo_text_processing not found!
[+0800 20241013 17:03:26] [WARN]  WebUI  | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install nemo_text_processing
[+0800 20241013 17:03:26] [WARN]  WebUI  | funcs | Package WeTextProcessing not found!
[+0800 20241013 17:03:26] [WARN]  WebUI  | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
[+0800 20241013 17:03:26] [INFO]  WebUI  | webui | Models loaded successfully.
* Running on local URL:  http://0.0.0.0:8080

To create a public link, set `share=True` in `launch()`.
text:   0%|▏                                                                            | 1/384(max) [00:00,  3.95it/s]We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
text:  17%|█████████████▎                                                              | 67/384(max) [00:05, 13.30it/s]
code:  24%|█████████████████▉                                                        | 495/2048(max) [00:31, 15.72it/s]
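
One aside from the log above: it warns that no ffmpeg is installed, so output falls back to wav. If you want other output formats (or just to clear the warning), installing ffmpeg into the environment should do it (a sketch, assuming the conda-forge package):

conda install -c conda-forge ffmpeg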

8. The GPU isn't doing any work. What happened?

CUDA Toolkit Archive | NVIDIA Developer: https://developer.nvidia.com/cuda-toolkit-archive

CUDA installation tutorial (very detailed), CSDN: https://blog.****.net/m0_45447650/article/details/123704930

 

 

CUDA installation
When installing CUDA you are asked twice for a directory: first a temporary extraction directory, then the installation directory.

For the temporary extraction path the default is fine (you can also customize it); the temporary folder is deleted automatically when installation finishes.

For the installation directory the default is recommended.

Note: never set the temporary extraction directory to the same path as the CUDA installation directory, or the installation directory will be gone once installation finishes!

Choose custom installation.

After installation, configure the CUDA environment variables.

From the command line, test whether the installation succeeded.

Double-click the .exe file and choose the download path (the default is recommended).

Verify:
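
The usual checks (assuming the NVIDIA driver is installed and the CUDA bin directory is on PATH):

nvcc --version
nvidia-smi

nvcc reports the toolkit version; nvidia-smi shows the driver version and the GPUs it can see.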

cuDNN Archive | NVIDIA Developer: https://developer.nvidia.com/rdp/cudnn-archive

Detailed Windows CUDA and cuDNN installation tutorial, CSDN: https://blog.****.net/weixin_52677672/article/details/135853106

 

Installing ChatTTS on Windows 10 (2024, CUDA 10.1), CSDN: https://blog.****.net/counsellor/article/details/141437597

conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia

A long wait follows. Keep an eye on your disk space.

(chat_kimi) PS I:\chat_kimi\ChatTTS> conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
Channels:
 - pytorch
 - nvidia
 - defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: C:\Users\dell\.conda\envs\chat_kimi

  added / updated specs:
    - pytorch-cuda=12.4
    - pytorch==2.4.0
    - torchaudio==2.4.0
    - torchvision==0.19.0


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    pytorch-2.4.0              |py3.11_cuda12.4_cudnn9_0        1.20 GB  pytorch
    ------------------------------------------------------------
                                           Total:        1.20 GB

The following NEW packages will be INSTALLED:

  blas               pkgs/main/win-64::blas-1.0-mkl
  brotli-python      pkgs/main/win-64::brotli-python-1.0.9-py311hd77b12b_8
  certifi            pkgs/main/win-64::certifi-2024.8.30-py311haa95532_0
  charset-normalizer pkgs/main/noarch::charset-normalizer-3.3.2-pyhd3eb1b0_0
  cuda-cccl          nvidia/win-64::cuda-cccl-12.6.77-0
  cuda-cccl_win-64   nvidia/noarch::cuda-cccl_win-64-12.6.77-0
  cuda-cudart        nvidia/win-64::cuda-cudart-12.4.127-0
  cuda-cudart-dev    nvidia/win-64::cuda-cudart-dev-12.4.127-0
  cuda-cupti         nvidia/win-64::cuda-cupti-12.4.127-0
  cuda-libraries     nvidia/win-64::cuda-libraries-12.4.0-0
  cuda-libraries-dev nvidia/win-64::cuda-libraries-dev-12.4.0-0
  cuda-nvrtc         nvidia/win-64::cuda-nvrtc-12.4.127-0
  cuda-nvrtc-dev     nvidia/win-64::cuda-nvrtc-dev-12.4.127-0
  cuda-nvtx          nvidia/win-64::cuda-nvtx-12.4.127-0
  cuda-opencl        nvidia/win-64::cuda-opencl-12.6.77-0
  cuda-opencl-dev    nvidia/win-64::cuda-opencl-dev-12.6.77-0
  cuda-profiler-api  nvidia/win-64::cuda-profiler-api-12.6.77-0
  cuda-runtime       nvidia/win-64::cuda-runtime-12.4.0-0
  cuda-version       nvidia/noarch::cuda-version-12.6-3
  filelock           pkgs/main/win-64::filelock-3.13.1-py311haa95532_0
  freetype           pkgs/main/win-64::freetype-2.12.1-ha860e81_0
  gmpy2              pkgs/main/win-64::gmpy2-2.1.2-py311h7f96b67_0
  idna               pkgs/main/win-64::idna-3.7-py311haa95532_0
  intel-openmp       pkgs/main/win-64::intel-openmp-2023.1.0-h59b6b97_46320
  jinja2             pkgs/main/win-64::jinja2-3.1.4-py311haa95532_0
  jpeg               pkgs/main/win-64::jpeg-9e-h827c3e9_3
  lcms2              pkgs/main/win-64::lcms2-2.12-h83e58a3_0
  lerc               pkgs/main/win-64::lerc-3.0-hd77b12b_0
  libcublas          nvidia/win-64::libcublas-12.4.2.65-0
  libcublas-dev      nvidia/win-64::libcublas-dev-12.4.2.65-0
  libcufft           nvidia/win-64::libcufft-11.2.0.44-0
  libcufft-dev       nvidia/win-64::libcufft-dev-11.2.0.44-0
  libcurand          nvidia/win-64::libcurand-10.3.7.77-0
  libcurand-dev      nvidia/win-64::libcurand-dev-10.3.7.77-0
  libcusolver        nvidia/win-64::libcusolver-11.6.0.99-0
  libcusolver-dev    nvidia/win-64::libcusolver-dev-11.6.0.99-0
  libcusparse        nvidia/win-64::libcusparse-12.3.0.142-0
  libcusparse-dev    nvidia/win-64::libcusparse-dev-12.3.0.142-0
  libdeflate         pkgs/main/win-64::libdeflate-1.17-h2bbff1b_1
  libjpeg-turbo      pkgs/main/win-64::libjpeg-turbo-2.0.0-h196d8e1_0
  libnpp             nvidia/win-64::libnpp-12.2.5.2-0
  libnpp-dev         nvidia/win-64::libnpp-dev-12.2.5.2-0
  libnvfatbin        nvidia/win-64::libnvfatbin-12.6.77-0
  libnvfatbin-dev    nvidia/win-64::libnvfatbin-dev-12.6.77-0
  libnvjitlink       nvidia/win-64::libnvjitlink-12.4.99-0
  libnvjitlink-dev   nvidia/win-64::libnvjitlink-dev-12.4.99-0
  libnvjpeg          nvidia/win-64::libnvjpeg-12.3.1.89-0
  libnvjpeg-dev      nvidia/win-64::libnvjpeg-dev-12.3.1.89-0
  libpng             pkgs/main/win-64::libpng-1.6.39-h8cc25b3_0
  libtiff            pkgs/main/win-64::libtiff-4.5.1-hd77b12b_0
  libuv              pkgs/main/win-64::libuv-1.48.0-h827c3e9_0
  libwebp-base       pkgs/main/win-64::libwebp-base-1.3.2-h2bbff1b_0
  lz4-c              pkgs/main/win-64::lz4-c-1.9.4-h2bbff1b_1
  markupsafe         pkgs/main/win-64::markupsafe-2.1.3-py311h2bbff1b_0
  mkl                pkgs/main/win-64::mkl-2023.1.0-h6b88ed4_46358
  mkl-service        pkgs/main/win-64::mkl-service-2.4.0-py311h2bbff1b_1
  mkl_fft            pkgs/main/win-64::mkl_fft-1.3.10-py311h827c3e9_0
  mkl_random         pkgs/main/win-64::mkl_random-1.2.7-py311hea22821_0
  mpc                pkgs/main/win-64::mpc-1.1.0-h7edee0f_1
  mpfr               pkgs/main/win-64::mpfr-4.0.2-h62dcd97_1
  mpir               pkgs/main/win-64::mpir-3.0.0-hec2e145_1
  mpmath             pkgs/main/win-64::mpmath-1.3.0-py311haa95532_0
done
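
Before relaunching the WebUI, a quick check that PyTorch can now see the GPU (a minimal one-liner, run inside the chat_kimi environment; it should print 12.4 and True):

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"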
(chat_kimi) PS I:\chat_kimi\ChatTTS> python examples\web\webui.py
[+0800 20241014 05:22:44] [WARN]  WebUI  | funcs | no ffmpeg installed, use wav file output
[+0800 20241014 05:22:44] [INFO]  WebUI  | webui | loading ChatTTS model...
[+0800 20241014 05:22:44] [INFO] ChatTTS | dl | checking assets...
[+0800 20241014 05:22:45] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20241014 05:22:45] [INFO] ChatTTS | core | use device cuda:0
[+0800 20241014 05:22:45] [INFO] ChatTTS | core | vocos loaded.
[+0800 20241014 05:22:45] [INFO] ChatTTS | core | dvae loaded.
[+0800 20241014 05:22:46] [INFO] ChatTTS | core | embed loaded.
[+0800 20241014 05:22:47] [INFO] ChatTTS | core | gpt loaded.
[+0800 20241014 05:22:47] [INFO] ChatTTS | core | speaker loaded.
[+0800 20241014 05:22:47] [INFO] ChatTTS | core | decoder loaded.
[+0800 20241014 05:22:47] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20241014 05:22:47] [WARN]  WebUI  | funcs | Package nemo_text_processing not found!
[+0800 20241014 05:22:47] [WARN]  WebUI  | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install nemo_text_processing
[+0800 20241014 05:22:47] [WARN]  WebUI  | funcs | Package WeTextProcessing not found!
[+0800 20241014 05:22:47] [WARN]  WebUI  | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
[+0800 20241014 05:22:47] [INFO]  WebUI  | webui | Models loaded successfully.
* Running on local URL:  http://0.0.0.0:8080

To create a public link, set `share=True` in `launch()`.

Look at that speed: more than doubled.

text:   0%|                                                                                 | 0/384(max) [00:00, ?it/s]C:\Users\dell\.conda\envs\chat_kimi\Lib\site-packages\transformers\models\llama\modeling_llama.py:655: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
text:   0%|▏                                                                            | 1/384(max) [00:00,  3.58it/s]We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
text:  19%|██████████████▍                                                             | 73/384(max) [00:02, 32.76it/s]
code:  25%|██████████████████▋                                                       | 517/2048(max) [00:12, 42.36it/s]

Check the timing: about 10 seconds of audio generated in roughly 13 seconds. Not bad.

Next section: fetch text from Kimi and speak it aloud.
