A shared folder with AI prompts and code snippets
From workspace: Nvidia
Team: Main
Total snippets: 6
Converts the loaded VectorStoreIndex into a query engine and asks a natural language question about the dataset.
# Setup index query engine using LLM
query_engine = index.as_query_engine()

# Test out a query in natural language
response = query_engine.query("who is the director of the movie Titanic?")
response.metadata
response.response
Loads documents from a local directory using SimpleDirectoryReader and creates a VectorStoreIndex with the current service context (LLM + embeddings).
# Create query engine with cross encoder reranker
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
import torch

documents = SimpleDirectoryReader("./toy_data").load_data()
index =...
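The snippet is cut off at the index construction. A minimal sketch of how it likely continues, assuming the global service context from the service-context snippet below has already been set (the ./toy_data path and from_documents usage follow the visible code; everything else is an assumption):

from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load the toy dataset from disk (as in the visible part of the snippet)
documents = SimpleDirectoryReader("./toy_data").load_data()

# Assumption: build the index using the globally set service context (LLM + embeddings)
index = VectorStoreIndex.from_documents(documents)

# The index can then back the query engine shown in the first snippet
query_engine = index.as_query_engine()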
Wraps the NVIDIA LLM (llm) and the embedding (li_embedding) into ServiceContext and sets it globally for LlamaIndex to use.
# Bring in stuff to change service context
from llama_index import set_global_service_context
from llama_index import ServiceContext

# Create new service context instance
service_context = ServiceContext.from_defaults(
    chunk_size=1024,
    ...
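The from_defaults call is truncated. A hedged sketch of how it probably continues, based on the description above (llm and li_embedding are the names used there; any keyword beyond chunk_size=1024 is an assumption):

from llama_index import ServiceContext, set_global_service_context

# Assumption: pass the NVIDIA LLM and the wrapped embedding defined in the other snippets
service_context = ServiceContext.from_defaults(
    chunk_size=1024,
    llm=llm,
    embed_model=li_embedding,
)

# Make this context the default for all subsequent LlamaIndex calls
set_global_service_context(service_context)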
Loads the nvolveqa_40k embedding model from NVIDIA endpoints and wraps it into a LlamaIndex-compatible embedding instance.
# Create embeddings instance, wrapping the NVIDIA endpoint embedding into a LlamaIndex-compatible embedding
# Bring in embeddings wrapper
from llama_index.embeddings import LangchainEmbedding
from langchain_nvidia_ai_endpoints import...
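The import is cut off. A minimal sketch of the likely remainder, assuming the NVIDIAEmbeddings class from langchain_nvidia_ai_endpoints and the li_embedding name referenced in the service-context snippet (the exact constructor arguments are assumptions):

from llama_index.embeddings import LangchainEmbedding
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Assumption: load the nvolveqa_40k embedding model from the NVIDIA endpoint
nv_embedding = NVIDIAEmbeddings(model="nvolveqa_40k")

# Wrap the LangChain embedding so LlamaIndex can use it
li_embedding = LangchainEmbedding(nv_embedding)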
Runs a test call to the NVIDIA mixtral_8x7b endpoint using LangChain to verify your API key and model setup.
# Test run and see that you can generate a response successfully
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mixtral_8x7b", nvidia_api_key=nvapi_key)
result = llm.invoke("Write a ballad about...
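The prompt string is truncated. A sketch of the full test call with a placeholder prompt (the prompt text and the print call are assumptions; nvapi_key comes from the API-key snippet below):

from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mixtral_8x7b", nvidia_api_key=nvapi_key)

# Placeholder prompt -- the original prompt text is cut off in the snippet
result = llm.invoke("Write a ballad about ...")
print(result.content)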
Checks whether a valid API key (starting with nvapi-) is present in the environment. If not, prompts the user to input it securely using getpass.
import getpass
import os

## API Key can be found by going to NVIDIA NGC -> AI Foundation Models -> (some model) -> Get API Code or similar.
## 10K free queries to any endpoint (which is a lot actually).
# del os.environ['NVIDIA_API_KEY']  ##...
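The check itself is cut off. A minimal sketch of the pattern described above: validate an existing NVIDIA_API_KEY, otherwise prompt with getpass (the exact messages are assumptions):

import getpass
import os

# Assumption: this is the check-or-prompt pattern described in the snippet
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already present in the environment")
else:
    nvapi_key = getpass.getpass("Enter your NVIDIA API key (starts with nvapi-): ")
    assert nvapi_key.startswith("nvapi-"), "Key does not start with nvapi-"
    os.environ["NVIDIA_API_KEY"] = nvapi_key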