Ever since ChatGPT came out, people have been building personalized versions of ChatGPT for their own data. We even wrote a tutorial on this, and then ran a competition about it a few months ago. The desire and demand for this highlight an important limitation of ChatGPT: it doesn’t know about YOUR data, and most people would find it more useful if it did. So how do you go about building a chatbot that knows about your data?
I built an app for question-answering over the full history of Lex Fridman podcasts. It uses Whisper for audio-to-text, followed by LangChain for dataset processing and embedding. Embeddings are stored in Pinecone, and LangChain's vector-store search finds relevant podcast clips for a given user question. The UI elements are inspired by McKay Wrigley's work. Code is here.
Link
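As a rough sketch of that pipeline, the pieces fit together roughly as below. The file name, the `lex-podcasts` index name, and the API keys in environment variables are placeholders; this illustrates the general approach, not the app's actual code.

```python
# Illustrative transcribe -> embed -> retrieve pipeline; names and keys are placeholders.
import os
import pinecone
import whisper
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

# 1. Audio -> text with Whisper
transcript = whisper.load_model("base").transcribe("lex_podcast_episode.mp3")["text"]

# 2. Split the transcript into overlapping chunks for embedding
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(transcript)

# 3. Embed the chunks and store them in a Pinecone index
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
index = Pinecone.from_texts(chunks, OpenAIEmbeddings(), index_name="lex-podcasts")

# 4. Answer a question by retrieving relevant clips and passing them to the LLM
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=index.as_retriever(),
)
print(qa.run("What did the guest say about AGI timelines?"))
```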
All production ML models need monitoring, and NLP models are no exception. But monitoring models that use text data can be quite different from monitoring, say, a model built on tabular data.
Link
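As a toy illustration of why text needs different monitoring, one common tactic is to embed a reference batch and a production batch of text and compare the distributions. The vectorizer and sample texts below are purely illustrative and not taken from the linked article.

```python
# Toy drift check for text data: compare embedded reference vs. production batches.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_texts = ["order arrived on time", "great support experience"]    # training-era samples
production_texts = ["app crashes on login", "refund still not processed"]  # recent traffic

vectorizer = TfidfVectorizer().fit(reference_texts + production_texts)
ref_centroid = np.asarray(vectorizer.transform(reference_texts).mean(axis=0))
prod_centroid = np.asarray(vectorizer.transform(production_texts).mean(axis=0))

# A large drop in similarity between the two centroids can flag potential drift.
drift_score = 1 - cosine_similarity(ref_centroid, prod_centroid)[0, 0]
print(f"drift score: {drift_score:.2f}")
```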
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science: the effect of prompt engineering methods can vary a lot across models, so heavy experimentation and heuristics are required.
Link
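A minimal example of in-context prompting: the classification below is steered entirely by the few-shot examples in the prompt, with no weight updates. The model name and reviews are illustrative, and this assumes the pre-1.0 `openai` Python client.

```python
# Few-shot (in-context) prompting: the examples in the prompt steer the model's output.
import openai

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." Sentiment: Positive
Review: "The screen cracked after a week." Sentiment: Negative
Review: "Setup took five minutes and it just works." Sentiment:"""

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model choice
    prompt=few_shot_prompt,
    max_tokens=5,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # expected: "Positive"
```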
ChatGPT has reimagined the possibilities for chatbots! This webinar will show you how Botpress is reinventing the entire bot-building process with generative AI.
Link
Fixie is a platform for building applications using Large Language Models. With Fixie, you can write apps that communicate, in natural language, with one or more Agents that can access individual APIs or sources of data, such as GitHub, Google Calendar, or a database.
Link
This video explains how to use the GPT-4 API and build an evolving chat history that respects token context limits by dropping earlier conversation when necessary. We will also learn how to stream responses to the terminal and to a Streamlit UI. We will build two different Gradio UIs and a web app using HTML, CSS, JavaScript, Python, and FastAPI.
Link
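A sketch of the history-trimming and streaming pattern the video covers, assuming the pre-1.0 `openai` Python client; the token budget and model name are illustrative.

```python
# Keep a running message list, drop the oldest turns when over budget, stream the reply.
import openai
import tiktoken

MODEL, TOKEN_LIMIT = "gpt-4", 7000  # illustrative budget
enc = tiktoken.encoding_for_model(MODEL)
history = [{"role": "system", "content": "You are a helpful assistant."}]

def count_tokens(messages):
    return sum(len(enc.encode(m["content"])) for m in messages)

def chat(user_input):
    history.append({"role": "user", "content": user_input})
    # Drop the oldest non-system turns until the conversation fits the context budget.
    while count_tokens(history) > TOKEN_LIMIT and len(history) > 2:
        history.pop(1)
    reply = ""
    for chunk in openai.ChatCompletion.create(model=MODEL, messages=history, stream=True):
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)  # stream tokens to the terminal as they arrive
        reply += delta
    history.append({"role": "assistant", "content": reply})
    return reply
```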
With the emergence of Large Language Models (LLMs), AI technologies have advanced to a level where humans can converse with chatbots in a way that resembles human conversation. In my opinion, chatbots are poised to become an essential component of our daily lives for a wide range of problem-solving tasks. We will soon encounter chatbots in various domains, including customer service and personal assistance.
Link
Language models are statistical methods that predict the succession of tokens in a sequence, trained on natural text. Large language models (LLMs) are neural network-based language models with hundreds of millions (BERT) to over a trillion parameters (MiCS), and whose size makes single-GPU training impractical. LLMs’ generative abilities make them popular for text synthesis, summarization, machine translation, and more.
Link
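To make "predicting the succession of tokens" concrete, here is a small sketch using GPT-2 as a stand-in for any causal language model, printing its top next-token probabilities for a prompt.

```python
# Inspect a causal LM's probability distribution over the next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token position
probs = torch.softmax(logits, dim=-1)

# Print the five most likely next tokens and their probabilities.
for prob, token_id in zip(*probs.topk(5)):
    print(f"{tokenizer.decode(int(token_id)):>10s}  {prob.item():.3f}")
```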
An example open-source, event-driven application that generates a new bedtime story for your children every night using Lambda, EventBridge, DynamoDB, App Runner, ChatGPT, and DALL-E.
Link
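A hypothetical Lambda handler in the spirit of that project: an EventBridge schedule invokes it nightly, ChatGPT writes the story, and the result lands in DynamoDB. The table name, prompt, and item schema are made up for illustration, and the DALL-E and App Runner pieces are omitted.

```python
# Nightly Lambda: generate a bedtime story with ChatGPT and store it in DynamoDB.
import datetime
import boto3
import openai

table = boto3.resource("dynamodb").Table("BedtimeStories")  # hypothetical table name

def handler(event, context):
    story = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a short bedtime story for a five-year-old."}],
    )["choices"][0]["message"]["content"]

    table.put_item(Item={"date": datetime.date.today().isoformat(), "story": story})
    return {"statusCode": 200}
```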
Learn how to use React and the OpenAI API to create an application like ChatGPT. The application can answer your questions, translate text into different languages, or even convert JavaScript code to Python.
Link
Supabase hired me to build ClippyGPT, their next-generation doc search. You can ask our old friend Clippy anything you want about Supabase, and it will answer using natural language. Powered by OpenAI + prompt engineering.
Link
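The general "retrieve, then prompt" pattern behind a doc-search bot like this looks roughly like the sketch below: retrieved documentation sections are injected into the prompt, and the model is instructed to answer only from them. The persona, instructions, and placeholder retrieval results are assumptions, not the project's actual implementation.

```python
# Answer a question from retrieved doc sections by constraining the model to that context.
import openai

def answer_from_docs(question, doc_sections):
    context = "\n---\n".join(doc_sections)
    prompt = (
        "You are Clippy, a friendly assistant for the Supabase documentation.\n"
        "Answer the question using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# doc_sections would normally come from an embedding similarity search over the docs.
print(answer_from_docs("How do I enable row level security?", ["...retrieved doc text..."]))
```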