A shared folder with AI prompts and code snippets
From workspace: Nvidia
Team: Main
Total snippets: 6
Split documents into chunks, create embeddings using NVIDIA AI Endpoints, and store them locally in the ./embed directory using FAISS.
def index_docs(url: Union[str, bytes], splitter, documents: List[str], dest_embed_dir) -> None:
    """
    Split the document into chunks and create embeddings for the document

    Args:
        url: Source url for the document
        splitter: ...
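The snippet above is truncated, and the real splitting is done by LangChain's RecursiveCharacterTextSplitter. As a library-free illustration of the chunking step it performs, here is a minimal sketch using a hypothetical `split_text` helper that produces fixed-size chunks with overlap:

```python
from typing import List


def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> List[str]:
    """Split text into overlapping fixed-size chunks (a simplified
    stand-in for RecursiveCharacterTextSplitter)."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks


chunks = split_text("a" * 250, chunk_size=100, overlap=20)
print(len(chunks))  # 3 chunks: 100 + 100 + 90 characters
```

In the real snippet, each chunk would then be embedded and added to the FAISS index rather than returned as plain strings.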
Load Triton documentation from a list of URLs, split it into chunks with RecursiveCharacterTextSplitter, then generate embeddings.
def create_embeddings(embedding_path: str = "./embed"):
    print(f"Storing embeddings to {embedding_path}")
    # List of web pages containing NVIDIA Triton technical documentation
    urls = [
        ...
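The real snippet builds a FAISS index from NVIDIA AI Endpoints embeddings under the destination directory. As a library-free sketch of just the persistence step, this hypothetical `store_chunks` helper writes each chunk to its own file under the embedding path instead of serializing a FAISS index:

```python
import tempfile
from pathlib import Path
from typing import List


def store_chunks(documents: List[str], dest_dir: str = "./embed") -> List[str]:
    """Write each chunk to its own file under dest_dir (a placeholder
    for the FAISS index the real snippet builds), creating the
    directory if needed. Returns the stored file names."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for i, doc in enumerate(documents):
        (dest / f"chunk_{i:04d}.txt").write_text(doc, encoding="utf-8")
    return sorted(p.name for p in dest.iterdir())


with tempfile.TemporaryDirectory() as tmp:
    names = store_chunks(["chunk one", "chunk two"], dest_dir=tmp)
print(names)  # ['chunk_0000.txt', 'chunk_0001.txt']
```

The `mkdir(parents=True, exist_ok=True)` call mirrors what the real snippet needs before writing to ./embed for the first time.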
Load HTML content from a URL, clean it, and return plain text (used for embedding).
import re
from typing import List, Union

import requests
from bs4 import BeautifulSoup


def html_document_loader(url: Union[str, bytes]) -> str:
    """
    Loads the HTML content of a document from a given URL and returns its content.
    """
    ...
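The truncated snippet uses requests and BeautifulSoup. To show the same cleaning idea without third-party dependencies or a network call, here is a standard-library sketch that extracts visible text from an HTML string, skipping script and style contents (the fetch step is omitted):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.parts.append(data)


def html_to_text(html: str) -> str:
    """Strip tags and collapse whitespace, returning plain text."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())


print(html_to_text("<p>Hello <b>world</b></p><script>var x = 1;</script>"))
# Hello world
```

BeautifulSoup's `get_text()` does this (and handles malformed HTML far more robustly); the sketch only illustrates what "clean it and return plain text" means here.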
Prompt the user for a valid NVIDIA API key and set it as an environment variable.
import getpass
import os

if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvapi_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    ...
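The same check can be wrapped in a reusable function that prompts only when needed and persists the key to the environment; this sketch (hypothetical helper `ensure_nvidia_api_key`, raising instead of asserting) assumes valid keys always start with "nvapi-" as the snippet does:

```python
import getpass
import os


def ensure_nvidia_api_key() -> str:
    """Return an NVIDIA API key, prompting only if the environment
    variable is missing or malformed (keys start with 'nvapi-')."""
    key = os.environ.get("NVIDIA_API_KEY", "")
    if not key.startswith("nvapi-"):
        key = getpass.getpass("Enter your NVIDIA API key: ")
        if not key.startswith("nvapi-"):
            raise ValueError(f"{key[:5]}... is not a valid NVIDIA API key")
        os.environ["NVIDIA_API_KEY"] = key
    return key
```

Storing the key in `os.environ` lets the LangChain NVIDIA integrations pick it up implicitly later in the notebook.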
Import all necessary modules from LangChain and NVIDIA to build a conversational retrieval pipeline.
import os

from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
from ...
Install LangChain, NVIDIA AI Endpoints, and FAISS for vector database querying.
!pip install langchain
!pip install langchain_nvidia_ai_endpoints
!pip install faiss-cpu