01 Models IO

Models

Input and Output


LangChain

● Models Inputs and Outputs


○ In this section we’ll begin our journey of
learning LangChain by understanding how to
create basic input prompts for models and
how to manage their outputs.
○ At its core, LangChain needs to be able to
send text to LLMs and to receive and work
with their outputs. This section of the course
focuses on the basic functionality and
syntax for doing this with LangChain.
LangChain

● Models Inputs and Outputs


○ Using LangChain for Model IO will later allow
us to build chains, and it also gives us more
flexibility to switch LLM providers in the
future, since the syntax is standardized
across LLMs and only the parameters or
arguments provided change.
○ LangChain supports all major LLM providers
(OpenAI, Azure, Anthropic, Google Cloud, etc.)
LangChain

● Models IO Section Overview


○ LLMs
○ Prompt Templates
○ Prompts and Model Exercise
○ Prompts and Model Exercise - Solution
○ Few Shot Prompt Templates
○ Parsing Outputs
○ Serialization - Saving and Loading Prompts
○ Models IO Exercise Project
○ Models IO Exercise Project - Solution
LangChain

● Models Inputs and Outputs


○ You should note that Model IO alone is not the
main value proposition of LangChain, and at
the start of this section you may find
yourself wondering about the use cases for
LangChain’s Model IO rather than the
original API.
○ If you do find yourself skeptical, hang on until
we reach the “parsing output” examples, and
then you’ll begin to see hints of its utility.
LangChain

● Models Inputs and Outputs


○ Once we combine the ideas we learn about
here with Data Connection and Chains, you’ll
have a very clear idea of why a developer
may choose to use LangChain rather than
building out their own solution.
○ It will save us a lot of time and give us
greater flexibility, but first we need to
understand the basics of interacting with
models for input and output with LangChain!
Let’s get started!
LLMs
LangChain

● Large Language Models


○ There are two main types of model APIs in
LangChain:
■ LLM
● Text Completion Model: returns the
most likely text continuation
■ Chat
● Converses through back-and-forth
messages and can also take a “system”
prompt.
LangChain

● Large Language Models


○ LangChain supports many different services:
■ https://python.langchain.com/docs/modules/model_io/models/llms/
○ In this course, we will focus on the OpenAI
API, since it is the most popular, and due to
upcoming changes post GPT-4 wide release,
we will also focus on the Chat Completion API.
○ Note that later when we learn about chains
the API calls will look very similar.
LangChain

● Large Language Models


○ Make sure you’ve created an OpenAI API key
before continuing, or if you’ve decided to use
a different model or service, check out the
API connection calls in the documentation
previously linked (we will also explore this in
the lecture).
○ Let’s get started with some basic LLM calls
using LangChain!
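As a preview, a minimal text-completion call might look like the sketch below. The import path assumes the langchain-openai package (older releases used `from langchain.llms import OpenAI`), the prompt text is hypothetical, and the actual API call is gated behind an environment-variable check so the sketch runs even without credentials:

```python
import os

# Hypothetical prompt; a text-completion model continues the text.
prompt = "Here is a fun fact about Pluto:"

# The real call needs an API key, so it is gated to keep the sketch runnable.
if os.environ.get("OPENAI_API_KEY"):
    from langchain_openai import OpenAI  # assumes the langchain-openai package

    llm = OpenAI()             # reads OPENAI_API_KEY from the environment
    print(llm.invoke(prompt))  # the completion comes back as a plain string
else:
    print("Set OPENAI_API_KEY to send the request.")
```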
Chat Models
LangChain

● Chat Models
○ Chat Models have a series of messages, just
like a chat text thread, except one side of the
conversation is an AI LLM.
○ LangChain provides 3 schema objects for this:
■ SystemMessage
● General system tone or personality
■ HumanMessage
● Human request or reply
■ AIMessage
● The AI’s reply (more on this later!)
LangChain

● Chat Models
○ In this lecture we’ll explore how to extend our
LLM integration to chat models with
LangChain, and how to pass in extra
parameters or arguments (for example,
increasing temperature).
○ We will also see a great caching feature that
saves replies in memory for common requests!
Prompt Templates
LangChain

● Prompt Templates
○ Templates allow us to easily configure and
modify our input prompts to LLM calls.
○ Templates offer a more systematic approach
to passing variables into prompts for models.
Instead of using f-string literals or .format()
calls, a PromptTemplate turns the variables
into named parameters that we can pass in.
LangChain

● Prompt Templates
○ Let’s explore how to build prompt templates
for both LLM Text Completion and prompt
templates for Chat Models.
Prompts and Models
Exercise
LangChain

● Even with just the basics of Model IO and


PromptTemplates, you can start to create
interesting applications.
● Let’s test your understanding of the LangChain
syntax with an exercise.
○ NOTE:
■ Still feeling a bit too new with
LangChain syntax? No worries! Skip to
the next lecture and treat the
exercise as a code-along example app
project.
LangChain

● The exercise notebook is:


○ 00-Models-IO
■ 02-Prompts-and-Model-Exercise.ipynb
Prompts and Models
Exercise Solution
Few Shot
Prompt Template
LangChain

● Few Shot Prompt Templates


○ Sometimes it’s easier to give the LLM a few
examples of input/output pairs before sending
your main request.
○ This allows the LLM to “learn” the pattern you
are looking for and may lead to better results.
○ It should be noted that there is currently no
consensus on best practices, but LangChain
recommends building a history of Human and
AI message inputs.
LangChain

● Few Shot Prompt Templates


○ We use the chat history to create example
input/output pairs to help the model
understand formatting.
Parsing Outputs
Part One
LangChain

● Parsing Outputs
○ Often you need LLM output in a particular
format; for example, you may want a Python
datetime object or a JSON object.
○ LangChain comes with parsing utilities that let
you easily convert outputs into precise
data types or even your own custom class
instances with Pydantic.
LangChain

● Parsers
○ Parsers consist of two key elements:
■ format_instructions
● An extra string that LangChain adds to
the end of a prompt to assist with
formatting.
■ parse() method
● A method that converts the model’s
string reply into the exact Python
object you need.
LangChain

● Parsers Example
○ You need a datetime response from an LLM.
■ Two Main Issues:
● The LLM always replies with a string:
○ “2020-01-01”
● The string could be formatted in many
ways:
○ “Jan 1st, 2020”
■ Parsers use format_instructions to take
care of the second issue and parse() to
take care of the first.
LangChain

● Parsers Example
○ You need a datetime response from an LLM.
■ Using DatetimeOutputParser:
● Replies are actual datetime objects
after using parse()
○ You can also use the OutputFixingParser
to re-attempt the correct parsed output with
another LLM call!
LangChain

● Let’s explore parsers further!


Parsing Outputs
Part Two
LangChain

● We’ve seen the basics of how to use parsers,
but what happens when that still isn’t enough to
format your output?
○ There are two ways to solve this:
■ System Prompt
● Have a strong system prompt to
combine with your format instructions.
■ OutputFixingParser
● Using a chain, re-send your original
reply to an LLM to try to fix it!
LangChain

● Important Note:
○ We are going to try our best to deliberately
induce an error in an output.
○ Models are actually quite good at following
instructions, which means you may not be
able to reproduce the error in the future.
Keep this in mind if you are coding along
with us; you may get the correct output
right at the start!
Parsing Outputs
Part Three
LangChain

● Using the Pydantic library for type validation,
LangChain’s PydanticOutputParser lets you
directly attempt to convert LLM replies into your
own custom Python objects (as long as you built
them with Pydantic).
○ Note, this requires some Pydantic knowledge
and a pip install of the pydantic library.
● Let’s explore a simple example!
Serialization
LangChain

● Serialization
○ You may find yourself wanting to save, share,
or load prompt objects.
○ LangChain allows you to easily save prompt
templates as JSON files to read or share.
○ Let’s explore this further with some
examples!
Models IO Exercise
LangChain

● We’ve learned a lot about Model IO with
LangChain!
● Let’s test your knowledge by having you build
out a few methods inside of a class to create a
simple historical quiz bot.
● Important Note:
○ Feel free to skip to the next lecture and
treat this as a code-along project if you
prefer!
Models IO
Exercise Solution
