
[BUG] After loading the fine-tuned model, it fails to produce normal output #4034

Open
xiaowang-xuexiji opened this issue May 16, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@xiaowang-xuexiji

In adapter_config.json, the base model has been changed to the chatglm3-6b model:
(screenshot)

PEFT_SHARE_BASE_WEIGHTS=true has also been set in the environment configuration:
(screenshot)

The PEFT file path has been added to args.model_names in startup.py:
(screenshot)

However, the output is garbled:
(screenshot)

MODEL_PATH configuration:
(screenshot)

@xiaowang-xuexiji added the bug label on May 16, 2024

dosubot bot commented May 16, 2024

To address the issue of receiving garbled text as output after importing a fine-tuned model, consider the following steps:

  1. Verify Model Path Configuration: Double-check that the MODEL_PATH for chatglm3-6b is correctly set. Incorrect paths or misplaced model files can lead to loading issues.

  2. PEFT File Path in startup.py: Ensure the PEFT file path you've added is correct and the file is accessible. Mistakes here could interfere with model performance.

  3. PEFT_SHARE_BASE_WEIGHTS Compatibility: Confirm that setting PEFT_SHARE_BASE_WEIGHTS to true is compatible with the chatglm3-6b model. This setting makes loaded PEFT adapters share one copy of the base model's weights, which not all models support seamlessly.

  4. conv_template Configuration: The setting of conv_template to None might be affecting input processing. Verify this setting does not conflict with the chatglm3-6b model's requirements.

  5. Model Compatibility: Ensure the chatglm3-6b model is compatible with your configuration, including any customizations in startup.py and adapter_config.json.

  6. Device Configuration: Check the device configuration (e.g., cuda, cpu) to ensure it's suitable for the chatglm3-6b model. Incompatibilities here can degrade output quality.

  7. Inspect Error Logs: Review application logs for any errors or warnings that might hint at the problem's root cause.

  8. Test with Default Configuration: Try reverting to a default configuration that previously worked, then incrementally apply your changes to isolate the issue.
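As a quick way to rule out steps 1 and 5 above, you can programmatically confirm that the adapter's adapter_config.json actually records chatglm3-6b as its base model; an adapter trained against a different base than the one being served typically produces exactly this kind of garbled output. The helper below is a hypothetical sketch, not part of the project:

```python
import json
import os

def check_adapter_config(adapter_dir, expected_base="chatglm3-6b"):
    """Return True if the adapter's recorded base model matches the
    model being served. A mismatch between the base model the adapter
    was fine-tuned on and the one loaded at inference time is a common
    cause of garbage output."""
    cfg_path = os.path.join(adapter_dir, "adapter_config.json")
    with open(cfg_path, encoding="utf-8") as f:
        cfg = json.load(f)
    base = cfg.get("base_model_name_or_path", "")
    return expected_base in base
```

If this returns False for your adapter directory, re-export the adapter (or fix `base_model_name_or_path`) before debugging the serving configuration further.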

For further guidance, you might want to consult the documentation or support resources for the models and libraries you're using.

To continue the conversation, mention @dosu.
