A cache that uses Upstash as the backing store. See https://docs.upstash.com/redis.
Example

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// Initialize the OpenAI model with an Upstash Redis cache for caching responses.
// Replace the placeholder strings with your Upstash REST URL and token
// (e.g. read from process.env).
const model = new ChatOpenAI({
  cache: new UpstashRedisCache({
    config: {
      url: "UPSTASH_REDIS_REST_URL",
      token: "UPSTASH_REDIS_REST_TOKEN",
    },
  }),
});
```
Look up LLM generations in the cache by prompt and associated LLM key.
Update the cache with the given generations.
Note that this overwrites any existing generations for the given prompt and LLM key.
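To illustrate the lookup/update contract described above, here is a minimal in-memory sketch of the same cache interface. `InMemorySketchCache` is a hypothetical illustration, not part of LangChain; the real `UpstashRedisCache` persists entries in Upstash Redis rather than a local `Map`, but follows the same semantics: entries are keyed by prompt plus LLM key, `lookup` returns `null` on a miss, and `update` overwrites any existing entry.

```typescript
// Hypothetical stand-in for the generation objects stored in the cache.
type Generation = { text: string };

// In-memory sketch of the lookup/update contract (assumed interface shape).
class InMemorySketchCache {
  private store = new Map<string, Generation[]>();

  // Entries are keyed by prompt plus LLM key, so different models
  // (or model settings) never collide on the same prompt.
  private key(prompt: string, llmKey: string): string {
    return `${llmKey}:${prompt}`;
  }

  // Look up generations by prompt and LLM key; null signals a cache miss.
  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  // Overwrites any existing generations for the given prompt and LLM key.
  async update(prompt: string, llmKey: string, value: Generation[]): Promise<void> {
    this.store.set(this.key(prompt, llmKey), value);
  }
}
```

Keying on both prompt and LLM key is what makes it safe to share one cache across several models: the same prompt sent to two different models produces two independent cache entries.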