I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.8
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Add one of the official template applications (question classification + knowledge base + chatbot) to the workspace, then change its model to a qwen model. Testing then produces an error: the knowledge-retrieval module still calls the gpt-3.5-turbo model, even though that model can no longer be found anywhere in the interface.
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
To address the issue where your system still tries to access the gpt-3.5-turbo model instead of the modified qwen model after adding a built-in application to your Dify workspace, follow these steps:
Verify Model Configuration: Ensure the qwen-turbo model is correctly configured in your workspace settings, including the model type and provider.
Update Default Model Settings: Use the DefaultModelApi endpoint to update the default model settings to qwen-turbo, ensuring the model type (llm) and provider are correctly set.
Check Model Credentials: Ensure the qwen-turbo model's credentials are correctly saved and validated using the appropriate API endpoints. This is crucial as the error suggests an issue with credential initialization.
Review Model Provider Configuration: Confirm that the model provider for qwen-turbo is correctly configured in your workspace, verifying the association between the model and its provider.
Adjust Model Parameters: Based on your needs, adjust parameters such as temperature, max_tokens, etc., according to the qwen-turbo.yaml configuration.
Ensure Privileged Role: Make sure the user has the necessary permissions for operations like saving model credentials or deleting models.
By following these steps and ensuring all configurations and permissions are correctly set, you should be able to successfully switch to using the qwen-turbo model in your Dify workspace. For detailed instructions on model configuration and management, refer to the Dify documentation.
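As a rough sketch of steps 2 and 3 above, a self-hosted instance's default-model setting can be updated over HTTP. Note that the endpoint path, payload field names, and provider name (`tongyi`) used here are assumptions for illustration, not the documented Dify API; check your instance's console API before using them.

```python
import json
import urllib.request


def build_default_model_payload(provider: str, model: str,
                                model_type: str = "llm") -> dict:
    """Build a request body for updating the workspace default model.

    NOTE: the field names below are assumptions based on the discussion
    above, not a documented Dify API schema.
    """
    return {
        "model_settings": [
            {
                "model_type": model_type,  # the issue concerns the LLM slot
                "provider": provider,
                "model": model,
            }
        ]
    }


def update_default_model(base_url: str, token: str, payload: dict) -> int:
    """POST the payload to a hypothetical console endpoint and return the status."""
    req = urllib.request.Request(
        f"{base_url}/console/api/workspaces/current/default-model",  # assumed path
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Build (but do not send) a payload switching the default LLM to qwen-turbo.
payload = build_default_model_payload("tongyi", "qwen-turbo")
print(json.dumps(payload, indent=2))
```

If the knowledge-retrieval module still reports gpt-3.5-turbo after such an update, the stale reference is likely stored in the app's own node configuration rather than the workspace default, so re-saving the affected node in the app editor may also be required.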
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.