LLM

Introducing Query Pipelines

Today we introduce Query Pipelines, a new declarative API within LlamaIndex that allows you to concisely orchestrate simple-to-advanced query workflows over your data for different use cases (RAG, structured data extraction, and more).
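The basic usage pattern chains components so each one's output feeds the next. A minimal sketch, following the announcement's sequential-chain example (import paths follow LlamaIndex ~v0.9 and have moved in later releases):

```python
# Minimal Query Pipelines sketch: chain a prompt template into an LLM.
# Import paths follow LlamaIndex ~v0.9; newer releases use llama_index.core.*.
from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate
from llama_index.query_pipeline import QueryPipeline

prompt_tmpl = PromptTemplate("Please generate movies related to {movie_name}")
llm = OpenAI(model="gpt-3.5-turbo")

# chain=[...] runs the components in order: template -> LLM.
pipeline = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)
output = pipeline.run(movie_name="The Departed")
print(output)
```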
LangChain + AWS

A collection of examples pairing LangChain with AWS services: LangChain + Amazon Textract, LangChain + AWS DynamoDB, LangChain + Bedrock (Knowledge Bases), and LangChain + AWS Lambda.
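To give a flavor of these integrations, here is a minimal sketch of calling a Bedrock model through LangChain. It assumes AWS credentials are configured and Bedrock model access is granted; import paths vary across LangChain releases (older versions ship the wrapper under langchain.llms, newer ones under langchain_aws):

```python
# Minimal LangChain + Amazon Bedrock sketch. Assumes AWS credentials
# are configured (e.g. via environment variables) and that access to
# the chosen Bedrock model has been enabled in your AWS account.
from langchain_community.llms import Bedrock

llm = Bedrock(
    model_id="anthropic.claude-v2",  # any Bedrock model id you have access to
    region_name="us-east-1",
)
print(llm.invoke("Summarize what Amazon Bedrock provides in one sentence."))
```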

LangChain cookbook

Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than those in the main documentation.
LangChain Templates

LangChain Templates are the easiest and fastest way to build a production-ready LLM application. These templates serve as a set of reference architectures for a wide variety of popular LLM use cases. They all follow a standard format that makes them easy to deploy with LangServe.
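Because every template exposes a standard runnable chain, serving one is a few lines with LangServe. A minimal sketch, where `my_template` stands in for whichever template package you pulled into your project:

```python
# Minimal sketch of serving a LangChain Template with LangServe.
# `my_template` is a placeholder for an installed template package
# (e.g. one added via the langchain-cli); swap in the real module.
from fastapi import FastAPI
from langserve import add_routes

from my_template import chain  # hypothetical template module

app = FastAPI(title="Template server")

# Exposes the chain's invoke/stream/batch endpoints under /my-template.
add_routes(app, chain, path="/my-template")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```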

LlamaHub

Get your RAG application rolling in no time. Mix and match our Data Loaders and Agent Tools to build custom RAG apps, or use our LlamaPacks as a starting point for your retrieval use cases.
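A minimal sketch of pulling a LlamaHub data loader into a RAG app, using the download_loader helper from LlamaIndex ~v0.9 (newer releases ship loaders as separate pip packages instead):

```python
# Minimal LlamaHub sketch: fetch a loader, ingest, and query.
# download_loader is the ~v0.9 entry point; newer LlamaIndex versions
# install loaders as standalone packages instead.
from llama_index import VectorStoreIndex, download_loader

WikipediaReader = download_loader("WikipediaReader")
docs = WikipediaReader().load_data(pages=["Retrieval-augmented generation"])

index = VectorStoreIndex.from_documents(docs)
print(index.as_query_engine().query("What is RAG?"))
```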
LlamaIndex - Building a Custom Agent

We show you how to build a simple agent that adds a retry layer on top of a RouterQueryEngine, allowing it to retry queries until the task is complete. The agent is built on top of both a SQL tool and a vector index query tool: even if a tool errors or answers only part of the question, the agent keeps retrying until the task is complete.
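The core retry loop can be sketched independently of the agent machinery: query, have an LLM judge whether the response completes the task, and fold the critique into a retry otherwise. Everything below (the judge prompt, the loop bound, the helper name) is illustrative rather than the tutorial's exact code:

```python
# Conceptual sketch of the retry layer, not the tutorial's exact code.
# `query_engine` stands in for a RouterQueryEngine over a SQL tool and
# a vector index tool; `llm` is any LlamaIndex LLM.
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-4")

def query_with_retries(query_engine, question: str, max_retries: int = 3) -> str:
    attempt = question
    response = ""
    for _ in range(max_retries):
        response = str(query_engine.query(attempt))
        # Ask the LLM to judge whether the response fully answers the task.
        verdict = llm.complete(
            f"Question: {question}\nResponse: {response}\n"
            "Does the response fully answer the question? Reply YES or NO, "
            "then explain what is missing."
        ).text
        if verdict.strip().upper().startswith("YES"):
            return response
        # Fold the critique back into the next attempt.
        attempt = f"{question}\n\nPrevious response was incomplete: {verdict}"
    return response
```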
LlamaIndex - Ingestion Pipeline

An IngestionPipeline applies a series of Transformations to your input data; the resulting nodes are either returned or, if a vector database is given, inserted into it. Each node+transformation pair is cached, so subsequent runs (if the cache is persisted) with the same node+transformation combination can reuse the cached result and save you time.
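A minimal sketch of the pattern, with import paths following LlamaIndex ~v0.9 (newer releases use llama_index.core.*):

```python
# Minimal IngestionPipeline sketch: split documents into nodes, embed
# them, and persist the node+transformation cache for later runs.
from llama_index import Document
from llama_index.embeddings import OpenAIEmbedding
from llama_index.ingestion import IngestionPipeline
from llama_index.text_splitter import SentenceSplitter

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=20),  # chunk into nodes
        OpenAIEmbedding(),  # attach an embedding to each node
    ],
)

nodes = pipeline.run(documents=[Document(text="...your data here...")])

# Persisting the cache lets a rerun over the same inputs skip
# node+transformation pairs that were already processed.
pipeline.persist("./pipeline_cache")
```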
LLMCompiler: An LLM Compiler for Parallel Function Calling

LLMCompiler is a framework that enables efficient and effective orchestration of parallel function calling with LLMs, covering both open-source and closed-source models, by automatically identifying which tasks can be performed in parallel and which are interdependent. Link: https://github.com/SqueezeAILab/LLMCompiler. See also the LLM Compiler Agent Cookbook.
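The core idea, running independent calls concurrently while dependents wait for their inputs, can be illustrated with a small asyncio sketch. This is a conceptual illustration of dependency-aware parallel function calling, not LLMCompiler's actual planner or API:

```python
# Conceptual sketch of dependency-aware parallel function calling,
# not LLMCompiler's implementation. Independent tasks run in parallel;
# a dependent task runs once its inputs are ready.
import asyncio

async def search_flights(dest: str) -> str:
    await asyncio.sleep(1)  # stands in for an LLM/tool call
    return f"flights to {dest}"

async def search_hotels(dest: str) -> str:
    await asyncio.sleep(1)
    return f"hotels in {dest}"

async def summarize(flights: str, hotels: str) -> str:
    await asyncio.sleep(1)
    return f"plan: {flights} + {hotels}"

async def main() -> None:
    # The two searches have no dependency on each other, so a planner
    # can dispatch them concurrently; summarize needs both results.
    flights, hotels = await asyncio.gather(
        search_flights("Tokyo"), search_hotels("Tokyo")
    )
    print(await summarize(flights, hotels))

asyncio.run(main())
```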