[TOC]

  1. Title: Langchain Use Cases 2023
  2. Review Date: Sat, Aug 26, 2023
  3. url: https://python.langchain.com/docs/get_started/quickstart

Langchain quickstart

  • The core building block of LangChain applications is the LLMChain. This combines three things:
    • LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
    • Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
    • Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.

PromptTemplate

  • modify prompt format easily

Chains: Combine LLMs and prompts in multi-step workflows

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)

Agents: dynamically call chains based on user input

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

for example, we can connect a Google search tool to an OpenAI model

  • we load a language model and some tools to use, and finally initialise an agent with:
    • the tools
    • the language model
    • the type of agent we want to use

memory: add state to chains and agents

  • a conversation chain saves all the previous turns of the conversation and replays them into the next prompt

Langchain Schema

Chat Messages

  • like text, but tagged with a message type (System, Human and AI)
    • System - helpful background context that tells the AI what to do
    • Human - messages that represent the user
    • AI - messages that show what the AI responded with

Document

  • load documents to feed into the language model

Langchain model

Language model

  • text-in, text-out model

Chat model

  • a model that takes a series of messages and returns a message output
  • the memory is explicitly shown in the chat schema

Text embedding model

  • converts your text into a vector (an embedding)
  • FAISS can be used as a retriever to get relevant documents

Prompt

  • prompt template: generates prompts by combining a fixed template with user input (i.e., the template contains placeholders)

example selector

  • an easy way to select from a series of examples that allows for dynamically placing in-context information into your prompt

SemanticSimilarityExampleSelector

  • we need a VectorStore class that is used to store the embeddings and do similarity check
    • FAISS is the default


Output Parsers

  • a helpful way to format the output of a model. Usually used for structured output
  • two big concepts
    • Format instructions - an autogenerated prompt that tells the LLM how to format its response based on your desired result
    • Parser - a method that extracts the model's output into the desired structure (usually JSON)

Indexes – Structuring documents for LLMs

Document loaders

  • load documents from online sources

Text splitter

  • often your document is too long for your LLM's context window, so you need to split it into chunks
  • text splitters help with this

Memory

  • a common one is chat history
  • from langchain.memory import ChatMessageHistory


Chains

  • combine different LLM calls and outputs

Simple sequential chain

  • decompose the task into individual steps
  • feed the output of the previous LLM call into the following prompt

Summarisation chain

  • supports different chain types, e.g. map_reduce (summarise each chunk, then summarise the summaries)

Agents


  • an agent is a language model that drives the decision making
  • agents make decisions automatically, choosing which tool or chain to call next

Extraction

  • extraction is the term for pulling useful information out of natural-language text and parsing it into a structured format
# To help construct chat messages
from langchain.schema import HumanMessage
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate

# to parse output and get structured data back
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

# example schemas (illustrative) describing the fields we want back
response_schemas = [
    ResponseSchema(name="artist", description="the name of the musical artist"),
    ResponseSchema(name="song", description="the name of the song the artist plays"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# the format instructions are a prompt snippet that LangChain generates
format_instructions = output_parser.get_format_instructions()

# the format_instructions will be passed as partial_variables in PromptTemplate

Question Answering

  • sources
    • allows you to return the source documents that the answer was based on
qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type='stuff', vectorstore=docsearch, return_source_documents=True)

NL Info to PDDL configs