After adding a built-in application from the official website to the workspace, the model change cannot be completed #4505

Open
mamacmm opened this issue May 19, 2024 · 1 comment
Labels
🐞 bug Something isn't working

Comments

mamacmm commented May 19, 2024

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.8

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Add one of the built-in applications from the official website (question classification + knowledge base + chatbot) to the workspace, then change the model inside the application to a qwen model. Testing produces an error: the knowledge retrieval module still calls the gpt-3.5-turbo model, yet the changed model cannot be found anywhere in the interface.

(Screenshot of the error attached.)

✔️ Expected Behavior

No response

❌ Actual Behavior

No response


dosubot bot commented May 19, 2024

To address the issue where your system still tries to access the gpt-3.5-turbo model instead of the modified qwen model after adding a built-in application to your Dify workspace, follow these steps:

  1. Verify Model Configuration: Ensure the qwen-turbo model is correctly configured in your workspace settings, including the model type and provider.

  2. Update Default Model Settings: Use the DefaultModelApi endpoint to update the default model settings to qwen-turbo, ensuring the model type (llm) and provider are correctly set (a rough sketch of this call follows after these steps).

  3. Check Model Credentials: Ensure the qwen-turbo model's credentials are correctly saved and validated using the appropriate API endpoints. This is crucial as the error suggests an issue with credential initialization.

  4. Review Model Provider Configuration: Confirm that the model provider for qwen-turbo is correctly configured in your workspace, verifying the association between the model and its provider.

  5. Adjust Model Parameters: Based on your needs, adjust parameters such as temperature, max_tokens, etc., according to the qwen-turbo.yaml configuration.

  6. Ensure Privileged Role: Make sure the user has the necessary permissions for operations like saving model credentials or deleting models.

By following these steps and ensuring all configurations and permissions are correctly set, you should be able to successfully switch to using the qwen-turbo model in your Dify workspace. For detailed instructions on model configuration and management, refer to the Dify documentation.
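
For reference, here is a minimal sketch of how step 2 (plus a quick check of the current default) could be scripted against a self-hosted instance's console API. The endpoint path, payload shape, and the tongyi provider id are assumptions inferred from the DefaultModelApi mentioned above; verify them against your Dify version and use a valid console session token before relying on this.

```python
import requests

# Assumptions: console base URL of the self-hosted deployment, a valid console
# session token, and "tongyi" as the provider id for qwen models.
DIFY_BASE = "http://localhost"
CONSOLE_TOKEN = "<console-session-token>"  # placeholder

headers = {
    "Authorization": f"Bearer {CONSOLE_TOKEN}",
    "Content-Type": "application/json",
}

# Check: inspect the current workspace default LLM (assumed endpoint path).
current = requests.get(
    f"{DIFY_BASE}/console/api/workspaces/current/default-model",
    params={"model_type": "llm"},
    headers=headers,
)
print(current.status_code, current.text)

# Update: point the workspace default LLM at qwen-turbo (assumed payload shape).
update = requests.post(
    f"{DIFY_BASE}/console/api/workspaces/current/default-model",
    json={
        "model_settings": [
            {"model_type": "llm", "provider": "tongyi", "model": "qwen-turbo"}
        ]
    },
    headers=headers,
)
update.raise_for_status()
```

Note that this only changes the workspace default; an app imported from a template keeps its own per-node model configuration, so the knowledge retrieval node may still need to be re-opened and re-saved with the new model in the app editor.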


dosubot added the 🐞 bug (Something isn't working) label on May 19, 2024