
[Usage]: Passing a guided_json in offline inference #4899

Open
ccdv-ai opened this issue May 18, 2024 · 0 comments
Labels
usage How to use vllm

Comments


ccdv-ai commented May 18, 2024

Your current environment

vllm 0.4.2

How would you like to use vllm

I'm trying to force JSON generation with outlines during offline inference, but I don't see anything about this in the documentation.

I also haven't found an example of chat completion for offline inference, so I've mimicked it using chat templates; that's why I need to force JSON generation.
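One possible approach (an untested sketch, not an officially documented API): build an outlines `JSONLogitsProcessor` from a JSON schema and pass it through `SamplingParams(logits_processors=...)`, which vLLM 0.4.x accepts for offline `LLM.generate`. The import path and constructor arguments of `JSONLogitsProcessor` have moved between outlines releases, so treat both as assumptions to verify against your installed version; the model name and the `Answer` schema below are placeholders.

```python
from pydantic import BaseModel
from vllm import LLM, SamplingParams
# Assumption: this import path matches your outlines version; it has
# moved across releases (e.g. outlines.serve.vllm vs outlines.integrations.vllm).
from outlines.serve.vllm import JSONLogitsProcessor

class Answer(BaseModel):
    name: str
    age: int

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model

# Constrain decoding to tokens that keep the output a valid instance of the
# schema. Depending on the outlines version, the second argument is either
# the engine or the tokenizer -- check your installed release.
logits_processor = JSONLogitsProcessor(Answer.schema_json(), llm.llm_engine)

params = SamplingParams(
    max_tokens=256,
    logits_processors=[logits_processor],
)

# Mimic chat completion offline by applying the model's chat template.
prompt = llm.get_tokenizer().apply_chat_template(
    [{"role": "user", "content": "Give me a name and an age as JSON."}],
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)  # expected to parse as Answer
```

If the logits-processor route doesn't fit your setup, the alternative is to wrap the vLLM model with outlines directly (`outlines.models` plus `outlines.generate.json`), which hides the processor wiring but again depends on the outlines version installed.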

@ccdv-ai ccdv-ai added the usage How to use vllm label May 18, 2024
1 participant