01 Models IO
● Chat Models
○ Chat Models use a series of messages, just
like a chat text thread, except one side of the
conversation is an AI LLM.
○ LangChain provides three schema objects for this:
■ SystemMessage
● General system tone or personality
■ HumanMessage
● Human request or reply
■ AIMessage
● AI’s reply (more on this later!)
LangChain
● Chat Models
○ In this lecture we’ll explore how to expand
LLM integrations to chat models with
LangChain and how to pass in extra
parameters or arguments (for example,
increasing the temperature).
○ We will also see a caching feature that saves
replies in memory for repeated requests!
Prompt Templates
● Prompt Templates
○ Templates allow us to easily configure and
modify our input prompts to LLM calls.
○ Templates offer a more systematic approach
to passing variables into prompts for models.
Instead of using f-string literals or .format()
calls, a PromptTemplate turns the
placeholders into named parameters that we
can pass in.
● Prompt Templates
○ Let’s explore how to build prompt templates
for both LLM text completion models and for
Chat Models.
Prompts and Models
Exercise
● Parsing Outputs
○ Often when connecting LLM output to other
code you need it in a particular format: for
example, a Python datetime object or a JSON
object.
○ LangChain comes with parsing utilities that let
you easily convert outputs into precise data
types, or even into your own custom class
instances with Pydantic.
● Parsers
○ Consist of two key elements:
■ format_instructions
● An extra string that LangChain adds to
the end of a prompt to pin down the
reply format.
■ parse() method:
● A method that converts the string reply
into the exact Python object you need.
● Parsers Example
○ You need a datetime response from an LLM.
■ Two Main Issues:
● LLM always replies back with a string:
○ “2020-01-01”
● Could be formatted in many ways:
○ “Jan 1st, 2020”
■ Parsers use format_instructions to take
care of the second issue and parse() to
take care of the first.
● Parsers Example
○ You need a datetime response from an LLM.
■ Using DatetimeOutputParser:
● Replies are actual datetime objects
after using parse()
○ You can also use the OutputFixingParser to
re-attempt the correct parsed output with
another LLM call!
● Important Note:
○ We are going to try our best to deliberately
induce an error in an output.
○ Models are actually quite good at following
instructions, which means you may find
yourself unable to reproduce the error later.
Keep this in mind if you are coding along
with us; you may get the correct output right
from the start!
Parsing Outputs
Part Three
● Serialization
○ You may find yourself wanting to save, share,
or load prompt objects.
○ LangChain allows you to easily save prompt
templates as JSON files to reload or share.
○ Let’s explore this further with some
examples!
Models IO Exercise