Deploying chat-with-mlx Locally on a Mac
- 2024-04-07 20:45:00
- pjd
1. Set Up the Environment
Open-source repository: https://github.com/qnguyen3/chat-with-mlx
Have a miniconda environment ready; I downloaded and installed the pkg installer from https://docs.anaconda.com/free/miniconda/#latest-miniconda-installer-links.
```
git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
conda create -n mlx-chat python=3.11
conda activate mlx-chat
pip install -e .
```
2. Run the App (a proxy may be required)
Follow these steps:
2.1 From the command line, verify that mlx and mlx.core can be imported.
2.2 Print the mlx.core version; as shown below, mine is 0.9.1.
2.3 If the version prints, the imports succeeded. Run chat-with-mlx: it downloads the required files, starts the server, and automatically opens the system default browser at http://127.0.0.1:7860.
```
(mlx-chat) pjd@mbp code % cd chat-with-mlx
(mlx-chat) pjd@mbp chat-with-mlx % ls
LICENSE  MANIFEST.in  README.md  assets  chat_with_mlx  chat_with_mlx.egg-info  pyproject.toml
(mlx-chat) pjd@mbp chat-with-mlx % python -c "import mlx"
(mlx-chat) pjd@mbp chat-with-mlx % python -c "import mlx.core as mx"
(mlx-chat) pjd@mbp chat-with-mlx % python -c "import mlx.core as mx;print(mx.__version__)"
0.9.1
(mlx-chat) pjd@mbp chat-with-mlx % chat-with-mlx
<All keys matched successfully>
Starting MLX Chat on port 7860
Sharing: False
Running on local URL: http://127.0.0.1:7860
```
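Beyond printing the version string, a quick way to confirm that MLX is actually working on the Apple-silicon GPU is to run a small computation. Here is a minimal sketch (not from the original post); MLX evaluates lazily, so mx.eval is what forces the matrix multiply to execute:

```python
import mlx.core as mx

# Two small random matrices, allocated on MLX's default device
# (the GPU on Apple silicon).
a = mx.random.normal((4, 4))
b = mx.random.normal((4, 4))

# MLX builds a lazy compute graph; mx.eval forces execution.
c = a @ b
mx.eval(c)

print(mx.__version__)       # e.g. 0.9.1
print(mx.default_device())  # Device(gpu, 0) on Apple silicon
```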
3. Download a Model (models are large, so downloads may be slow)
Once the app is running, open the UI and select a model; the backend then downloads the corresponding weights until the download completes (if the in-app download is slow, see the pre-download sketch after the model list below).
Supported models:
- Google Gemma-7b-it, Gemma-2b-it
- Mistral-7B-Instruct, OpenHermes-2.5-Mistral-7B, NousHermes-2-Mistral-7B-DPO
- Mixtral-8x7B-Instruct-v0.1, Nous-Hermes-2-Mixtral-8x7B-DPO
- Quyen-SE (0.5B), Quyen (4B)
- StableLM 2 Zephyr (1.6B)
- Vistral-7B-Chat, VBD-Llama2-7b-chat, vinallama-7b-chat
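If the in-app download is slow (e.g. behind a proxy), the weights can be fetched ahead of time with huggingface_hub so they land in the local cache that downstream loaders check first. A minimal sketch; the repo id below is an illustrative assumption (chat-with-mlx loads MLX-converted weights, typically published under the mlx-community organization on the Hugging Face Hub), so substitute the id matching the model you pick in the UI:

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Illustrative repo id -- replace with the MLX-converted model you
# actually selected in the chat-with-mlx UI.
repo_id = "mlx-community/gemma-2b-it"

# Downloads into the standard Hugging Face cache (~/.cache/huggingface/hub),
# so a later in-app load finds the files instead of re-downloading.
local_path = snapshot_download(repo_id=repo_id)
print("Model cached at:", local_path)
```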
4. Test Results
Performance in my local environment is acceptable; here is an illustration from the official repository:
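Since the UI at http://127.0.0.1:7860 is a Gradio app, it can also be exercised from a script. A minimal sketch using the gradio_client package; the endpoint names are specific to this app, so rather than guessing them the sketch just connects and lists what the server exposes:

```python
from gradio_client import Client  # pip install gradio_client

# Connect to the locally running chat-with-mlx instance.
client = Client("http://127.0.0.1:7860")

# Print the callable API endpoints and their parameters; one of these
# names can then be passed to client.predict(...) to send a prompt.
client.view_api()
```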