
llama-2-70B #17

Answered by chenhunghan
ranjanshivaji asked this question in Q&A

There is some discussion on Reddit regarding this: https://www.reddit.com/r/LocalLLaMA/comments/12vo2rn/ggml/. Basically, the GGML-quantized model needs about 4x less RAM, but at lower quality.
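For a rough sense of where the "4x" comes from, here is a back-of-the-envelope sketch: 16-bit weights take 2 bytes each, while a 4-bit quantization stores roughly 0.5 bytes per weight. The 70e9 parameter count and per-weight sizes below are illustrative assumptions, not measurements of any particular model file.

```python
# Back-of-the-envelope RAM estimate for llama-2-70B weights,
# illustrating the "roughly 4x less RAM" point.
# Assumptions (for illustration only): 70e9 parameters, 16-bit
# baseline weights, and an idealized 4-bit quantization.

N_PARAMS = 70e9      # approximate llama-2-70B parameter count
BYTES_FP16 = 2.0     # 16-bit weights
BYTES_Q4 = 0.5       # idealized 4-bit weights

def gib(n_bytes: float) -> float:
    """Convert bytes to GiB."""
    return n_bytes / 1024**3

fp16_gib = gib(N_PARAMS * BYTES_FP16)
q4_gib = gib(N_PARAMS * BYTES_Q4)

print(f"fp16 weights: ~{fp16_gib:.0f} GiB")              # ~130 GiB
print(f"4-bit weights: ~{q4_gib:.0f} GiB")               # ~33 GiB
print(f"ratio: ~{fp16_gib / q4_gib:.1f}x less RAM")      # ~4.0x
```

Note that real GGML q4 files come out slightly larger than this idealized figure because each block of weights also stores scale metadata, and the runtime needs additional memory for the KV cache on top of the weights.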

Answer selected by chenhunghan
Category: Q&A · 2 participants