docs:Add Cohere examples in documentation #17794

Merged (3 commits) on Feb 26, 2024
54 changes: 48 additions & 6 deletions docs/docs/get_started/quickstart.mdx
@@ -65,10 +65,10 @@ We will link to relevant docs.

## LLM Chain

- For this getting started guide, we will provide two options: using OpenAI (a popular model available via API) or using a local open source model.
+ We'll show how to use models available via API, like OpenAI and Cohere, and local open source models, using integrations like Ollama.

<Tabs>
- <TabItem value="openai" label="OpenAI" default>
+ <TabItem value="openai" label="OpenAI (API)" default>

First we'll need to install the LangChain x OpenAI integration package.

@@ -99,7 +99,7 @@ llm = ChatOpenAI(openai_api_key="...")
```

</TabItem>
- <TabItem value="local" label="Local">
+ <TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.

@@ -112,6 +112,37 @@ Then, make sure the Ollama server is running. After that, you can do:
```python
from langchain_community.llms import Ollama
llm = Ollama(model="llama2")
```

</TabItem>
<TabItem value="cohere" label="Cohere (API)">

First we'll need to install the Cohere SDK package.

```shell
pip install cohere
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```shell
export COHERE_API_KEY="..."
```

We can then initialize the model:

```python
from langchain_community.chat_models import ChatCohere

llm = ChatCohere()
```

If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when instantiating the `ChatCohere` class:

```python
from langchain_community.chat_models import ChatCohere

llm = ChatCohere(cohere_api_key="...")
```
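
Whichever route you take, an explicitly passed key takes precedence over the environment variable. A minimal pure-Python sketch of that precedence (the `resolve_api_key` helper below is hypothetical, for illustration only — the integration resolves the key internally):

```python
import os

def resolve_api_key(explicit_key=None):
    """Hypothetical helper mirroring how the integration resolves a key:
    an explicitly passed key wins over the COHERE_API_KEY env var."""
    key = explicit_key or os.environ.get("COHERE_API_KEY")
    if not key:
        raise ValueError(
            "No Cohere API key: pass cohere_api_key or set COHERE_API_KEY"
        )
    return key
```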

</TabItem>
@@ -200,10 +231,10 @@ docs = loader.load()

Next, we need to index it into a vectorstore. This requires a few components, namely an [embedding model](/docs/modules/data_connection/text_embedding) and a [vectorstore](/docs/modules/data_connection/vectorstores).

- For embedding models, we once again provide examples for accessing via OpenAI or via local models.
+ For embedding models, we once again provide examples for accessing via API or by running local models.

<Tabs>
- <TabItem value="openai" label="OpenAI" default>
+ <TabItem value="openai" label="OpenAI (API)" default>

Make sure you have the `langchain_openai` package installed and the appropriate environment variables set (these are the same as needed for the LLM).

@@ -214,7 +245,7 @@ embeddings = OpenAIEmbeddings()
```

</TabItem>
- <TabItem value="local" label="Local">
+ <TabItem value="local" label="Local (using Ollama)">

Make sure you have Ollama running (same set up as with the LLM).

@@ -224,6 +255,17 @@ from langchain_community.embeddings import OllamaEmbeddings
embeddings = OllamaEmbeddings()
```
</TabItem>
<TabItem value="cohere" label="Cohere (API)">

Make sure you have the `cohere` package installed and the appropriate environment variables set (these are the same as needed for the LLM).

```python
from langchain_community.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings()
```

</TabItem>
</Tabs>

Now, we can use this embedding model to ingest documents into a vectorstore.
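
Under the hood, the embedding model turns each text into a vector of floats, and the vectorstore retrieves documents whose vectors are most similar to the query's, commonly by cosine similarity. A minimal sketch with made-up 3-dimensional vectors (real models return hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# made-up embeddings for two documents and a query
doc_vectors = {"cat care tips": [0.9, 0.1, 0.0], "stock reports": [0.0, 0.2, 0.9]}
query_vector = [0.8, 0.2, 0.1]  # pretend this embeds "how to feed kittens"

# rank documents by similarity to the query
best = max(doc_vectors, key=lambda d: cosine_similarity(query_vector, doc_vectors[d]))
print(best)  # → cat care tips
```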
38 changes: 38 additions & 0 deletions docs/docs/modules/data_connection/text_embedding/index.mdx
@@ -17,6 +17,11 @@ The base Embeddings class in LangChain provides two methods: one for embedding d

### Setup

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
<TabItem value="openai" label="OpenAI" default>
To start we'll need to install the OpenAI partner package:

```bash
@@ -44,6 +49,39 @@ from langchain_openai import OpenAIEmbeddings
embeddings_model = OpenAIEmbeddings()
```

</TabItem>
<TabItem value="cohere" label="Cohere">

To start we'll need to install the Cohere SDK package:

```bash
pip install cohere
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```shell
export COHERE_API_KEY="..."
```

If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when instantiating the `CohereEmbeddings` class:

```python
from langchain_community.embeddings import CohereEmbeddings

embeddings_model = CohereEmbeddings(cohere_api_key="...")
```

Otherwise, you can initialize it without any parameters:
```python
from langchain_community.embeddings import CohereEmbeddings

embeddings_model = CohereEmbeddings()
```

</TabItem>
</Tabs>
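
Both tabs above implement the same base `Embeddings` interface: the two methods mentioned at the top of this page. A toy stand-in (not a real provider — its hash-based vectors carry no meaning) makes the contract concrete:

```python
# Toy embedding "model": not a real provider, just illustrates the
# two-method contract shared by OpenAIEmbeddings, CohereEmbeddings, etc.
class ToyEmbeddings:
    def embed_documents(self, texts):
        # one fixed-size vector of floats per input text
        return [self._embed(t) for t in texts]

    def embed_query(self, text):
        # a single vector for a query string
        return self._embed(text)

    def _embed(self, text, dim=4):
        # deterministic hash-based vector; real models use neural networks
        return [((hash(text) >> (8 * i)) & 0xFF) / 255.0 for i in range(dim)]
```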

### `embed_documents`
#### Embed list of texts

33 changes: 32 additions & 1 deletion docs/docs/modules/model_io/quick_start.mdx
@@ -46,7 +46,7 @@ llm = ChatOpenAI(openai_api_key="...")
```

</TabItem>
- <TabItem value="local" label="Local">
+ <TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.

@@ -62,6 +62,37 @@ from langchain_community.chat_models import ChatOllama

llm = Ollama(model="llama2")
chat_model = ChatOllama()
```

</TabItem>
<TabItem value="cohere" label="Cohere">

First we'll need to install the Cohere SDK package:

```shell
pip install cohere
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```shell
export COHERE_API_KEY="..."
```

We can then initialize the model:

```python
from langchain_community.chat_models import ChatCohere

llm = ChatCohere()
```

If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when instantiating the `ChatCohere` class:

```python
from langchain_community.chat_models import ChatCohere

llm = ChatCohere(cohere_api_key="...")
```

</TabItem>