AI

Ask Lex Agent

Great, we can see that the first thing the agent did was default to the “Lex Fridman DB” tool. The input to that tool, generated by the LLM, was “What did Lex Fridman say about the future of AI?”. Link
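For context, a tool-using agent like this can be wired up in a few lines with classic LangChain. The sketch below is only an assumption of how such a setup might look; `search_lex_db`, the tool description, and the prompt are placeholders, not the app's actual code.

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

# Hypothetical retriever standing in for whatever the "Lex Fridman DB" tool wraps.
def search_lex_db(query: str) -> str:
    """Return transcript snippets matching the query (placeholder)."""
    return "...relevant podcast excerpts for: " + query

tools = [
    Tool(
        name="Lex Fridman DB",
        func=search_lex_db,
        description="Searches Lex Fridman podcast transcripts. "
                    "Use for any question about what was said on the podcast.",
    )
]

llm = OpenAI(temperature=0)

# A zero-shot ReAct agent: the LLM reads the tool descriptions, decides to call
# the "Lex Fridman DB" tool, and generates the tool input itself.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What did Lex Fridman say about the future of AI?")
```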
Azure OpenAI Embeddings QnA

A simple web application for an OpenAI-enabled document search. This repo uses Azure OpenAI Service to create embedding vectors from documents. To answer a user's question, it retrieves the most relevant document and then uses GPT-3 to extract the matching answer. Link
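A rough sketch of that flow with the pre-1.0 openai Python SDK; the resource URL, deployment names, and documents below are placeholder assumptions, not the repo's actual code.

```python
import numpy as np
import openai

# Assumed Azure OpenAI configuration (deployment names are placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-KEY"

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

docs = ["Document about pricing ...", "Document about onboarding ..."]
doc_vecs = [embed(d) for d in docs]

def answer(question: str) -> str:
    q = embed(question)
    # Cosine similarity picks the most relevant document.
    sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
    context = docs[int(np.argmax(sims))]
    prompt = (f"Answer the question using only this document:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    resp = openai.Completion.create(engine="text-davinci-003", prompt=prompt, max_tokens=200)
    return resp["choices"][0]["text"].strip()
```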
DoMore.ai

Your Personalized AI Tools Catalog Link
JARVIS

Language serves as an interface for LLMs to connect numerous AI models for solving complicated AI tasks! Link
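One way to picture the idea, as a toy sketch rather than JARVIS's actual code: the LLM is asked in natural language which expert model fits a task, and the chosen model is then invoked. The expert registry and prompts here are illustrative assumptions.

```python
import openai

# Toy registry of "expert" models; names are illustrative only.
EXPERTS = {
    "image-captioning": lambda path: f"(caption for {path})",
    "translation": lambda text: f"(translation of {text!r})",
    "speech-to-text": lambda path: f"(transcript of {path})",
}

def plan(task: str) -> str:
    """Ask the LLM, in plain language, which expert should handle the task."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer with exactly one tool name from: " + ", ".join(EXPERTS)},
            {"role": "user", "content": task},
        ],
    )
    return resp["choices"][0]["message"]["content"].strip()

task = "Describe what is in photo.jpg"
expert = plan(task)  # e.g. "image-captioning"
output = EXPERTS.get(expert, lambda _: "no matching expert")("photo.jpg")
```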
nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. Link
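For flavor, here is a bare-bones character-level training loop in PyTorch. It is an illustrative sketch in the spirit of nanoGPT, not its actual train.py (which trains a proper Transformer with attention, mixed precision, and checkpointing); the file name and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Tiny character-level next-token predictor; "input.txt" is a placeholder corpus.
text = open("input.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

block_size, batch_size = 64, 32
model = nn.Sequential(
    nn.Embedding(len(chars), 128),          # token embeddings
    nn.Flatten(),                           # concatenate the context window
    nn.Linear(128 * block_size, len(chars)) # predict the next character
)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(1000):
    ix = torch.randint(len(data) - block_size - 1, (batch_size,)).tolist()
    x = torch.stack([data[i:i + block_size] for i in ix])   # context windows
    y = torch.stack([data[i + block_size] for i in ix])     # next characters
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```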
ChatGPT Gets Its “Wolfram Superpowers”!

To enable the functionality described here, select and install the Wolfram plugin from within ChatGPT. Note that this capability is so far available only to some ChatGPT Plus users; for more information, see OpenAI’s announcement. Link
GPT-Journey

Building a text- and image-based journey game powered by, and built with, GPT-3.5. Link
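A minimal sketch of one game turn, assuming the pre-1.0 openai SDK: the chat model narrates the next scene, and an image model renders it. The narrator prompt and image size are illustrative, not GPT-Journey's actual code.

```python
import openai

# Running conversation that keeps the story state between turns.
history = [{"role": "system",
            "content": "You are the narrator of a choose-your-own-adventure game. "
                       "Describe the next scene in two sentences, then offer two choices."}]

def take_turn(player_choice: str) -> tuple[str, str]:
    history.append({"role": "user", "content": player_choice})
    chat = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    scene = chat["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": scene})
    # Render the scene description as an image.
    image = openai.Image.create(prompt=scene[:400], n=1, size="512x512")
    return scene, image["data"][0]["url"]

scene, image_url = take_turn("I open the rusted door.")
```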
LangChain - Retrieval

Ever since ChatGPT came out, people have been building a personalized ChatGPT for their data. We even wrote a tutorial on this, and then ran a competition on it a few months ago. The desire and demand for this highlight an important limitation of ChatGPT - it doesn’t know about YOUR data, and most people would find it more useful if it did. So how do you go about building a chatbot that knows about your data?
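A typical shape of the answer with classic LangChain: load your documents, split them into chunks, embed and index the chunks, then at question time retrieve the most relevant chunks and let the LLM answer from them. The file name and question below are placeholders.

```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Load and split your own documents ("my_notes.txt" is a placeholder).
docs = TextLoader("my_notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks and index them in a local vector store.
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Retrieve relevant chunks per question and answer from that context only.
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0),
                                 retriever=store.as_retriever())
print(qa.run("What does my document say about pricing?"))
```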
Lex-GPT

I built an app for question-answering over the full history of Lex Fridman podcasts. It uses Whisper for audio-to-text, LangChain for dataset processing and embedding, Pinecone to store the embeddings, and LangChain vector DB search to find relevant podcast clips for a user question. The UI elements are inspired by Mckay Wrigley’s work. Code is here. Link
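Roughly, the pipeline might look like the sketch below. The index name, file name, chunking, and environment are placeholder assumptions, not the app's code; the client calls follow the pre-1.0 openai, whisper, and pinecone-client APIs.

```python
import openai, pinecone, whisper

# 1. Audio -> text with Whisper (file name is a placeholder).
audio_model = whisper.load_model("base")
transcript = audio_model.transcribe("lex_episode_001.mp3")["text"]

def embed(text: str) -> list[float]:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

# 2. Store one vector per transcript chunk in Pinecone.
pinecone.init(api_key="YOUR-KEY", environment="us-west1-gcp")
index = pinecone.Index("lex-podcasts")
clips = [transcript[i:i + 1000] for i in range(0, len(transcript), 1000)]
index.upsert([(f"ep001-{n}", embed(c), {"text": c}) for n, c in enumerate(clips)])

# 3. At question time, embed the question and fetch the closest clips.
hits = index.query(vector=embed("What did Lex say about the future of AI?"),
                   top_k=3, include_metadata=True)
for match in hits["matches"]:
    print(match["metadata"]["text"][:200])
```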
Prompt Engineering

Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effect of prompt engineering methods can vary a lot across models, thus requiring heavy experimentation and heuristics. Link
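A concrete example of in-context prompting: a few labeled examples placed in the prompt steer the model toward the desired behavior, with no gradient updates. The reviews and the pre-1.0 SDK call below are illustrative.

```python
import openai

# Few-shot prompt: the examples define the task and output format.
prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery died after two days."
Sentiment: negative

Review: "Setup took thirty seconds and it just works."
Sentiment: positive

Review: "The screen cracked the first week."
Sentiment:"""

resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                max_tokens=5, temperature=0)
print(resp["choices"][0]["text"].strip())  # expected: "negative"
```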