Lab Session 1 – 25 Oct 2024
• Useful for Complex Tasks: Effective in scenarios requiring logical analysis, math, or
structured reasoning.
Prompt engineering: Zero-shot inference
• Prompt engineering: providing a prompt that elicits the best response from the model
• Zero-shot inference: provide a well-crafted prompt to the foundational LLM and use the model's
response as the output. No additional training of the model is required
Prompt engineering: One-shot inference
Classify the following text:
Great to see a successful Chandrayan Mission!
Sentiment: Positive

Classify the following text:
The movie is boring
Sentiment:

Model output: Negative
• Prompt engineering: providing a prompt that elicits the best response from the model
• One-shot inference: provide a well-crafted prompt to the foundational LLM along
with one example, and use the model's response as the output. No
additional training of the model is required
Prompt engineering: Few-shot inference
Classify the following text:
Great to see a successful Chandrayan Mission!
Sentiment: Positive

Classify the following text:
The movie is boring
Sentiment: Negative

Classify the following text:
What an awesome match!
Sentiment:
• Prompt engineering: providing a prompt that elicits the best response from the model
• Few-shot inference: provide a well-crafted prompt to the foundational LLM along
with a few examples, and use the model's response as the output. No
additional training of the model is required
Few-shot learning
• The success of LLMs comes from their large size and their ability to store "knowledge"
in the model parameters, which is learned during model training.
Ref: https://www.pinecone.io/learn/series/langchain/langchain-prompt-templates/
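The few-shot prompt above can be assembled programmatically. A minimal sketch, using the example texts from the slides; the helper name is illustrative:

```python
# Labelled examples from the slides; extend this list to add more shots.
EXAMPLES = [
    ("Great to see a successful Chandrayan Mission!", "Positive"),
    ("The movie is boring", "Negative"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble labelled examples followed by the unlabelled query,
    leaving 'Sentiment:' open for the model to complete."""
    parts = []
    for text, label in EXAMPLES:
        parts.append(f"Classify the following text:\n{text}\nSentiment: {label}")
    parts.append(f"Classify the following text:\n{query}\nSentiment:")
    return "\n\n".join(parts)
```

Dropping all but one example from `EXAMPLES` gives the one-shot prompt; dropping all of them gives the zero-shot prompt.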
Advanced Prompt Engineering
• Chain of Thought
• Tree of Thoughts
• Graph of Thoughts
• ReAct Agents
Chain of Thought – see cot.py in my code
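The idea behind a chain-of-thought prompt can be sketched as below. This is not the contents of cot.py (which is distributed with the course code), just a minimal illustration: a worked example whose answer shows its intermediate steps nudges the model to reason step by step.

```python
def build_cot_prompt(question: str) -> str:
    """Prepend a worked example whose answer spells out intermediate
    steps, then ask the model to answer the new question the same way."""
    worked_example = (
        "Q: A shop sells pens at 5 rupees each. How much do 12 pens cost?\n"
        "A: Each pen costs 5 rupees. 12 pens cost 12 * 5 = 60 rupees. "
        "The answer is 60.\n\n"
    )
    return worked_example + f"Q: {question}\nA: Let's think step by step."
```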
Limitations of CoT Reasoning
• Chain-of-thought reasoning is intended to combat reasoning errors; however, models
can still produce confident but flawed reasoning chains.
2. An output parser framework that stops the agent from generating text once it has written a valid
action, executes that action in the environment, and returns the observation (appending it to the
text generated so far and prompting the LLM with the result).
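The parser described above can be sketched as follows. The `Action: tool[input]` syntax is an assumed convention for this sketch, not a fixed standard; the real framework would truncate generation at the first valid action.

```python
import re

# Assumed action syntax: "Action: tool_name[tool input]"
ACTION_RE = re.compile(r"Action:\s*(\w+)\[(.*?)\]")

def parse_first_action(llm_text: str):
    """Find the first valid action in the generated text and return
    (tool_name, tool_input, text_up_to_and_including_the_action),
    or None if the text contains no action (i.e. a final answer)."""
    m = ACTION_RE.search(llm_text)
    if m is None:
        return None
    return m.group(1), m.group(2), llm_text[: m.end()]
```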
• For each subsystem, we will walk through sample code and then let you code
• Please complete the dataset cleaning, database creation, and basic LLM prompting to
generate SQL today. Please create a demo video and place it in the shared folder.
• You want to analyze the low-margin-win constituencies in more depth by looking at
assembly-level granularity, gender ratio, and/or any other variable, and identify them.
• Analyze all lost constituencies, determine whether any of them are winnable this time, and list them.
[Architecture diagram: an NL Query goes to the LLM, which generates a SQL Query; the
Execution Runtime runs it against the database and returns Structured Results; the LLM
then produces a Text Answer and Code for Visualization, which the Processor/Visualizer
renders as the Final Output.]
Orchestrator
• Data Preparation
• Develop Prompts
• Evaluate
Data Preparation
• The files are in .xls format; open them in MS Excel and save them as .xlsx. This lets us avoid
installing the xlrd package and read the files directly with pandas.
• For excel no 34, remove unwanted rows and ensure that the sheet is a plain table.
• You will find that some fields in votes_secured are empty. Fill them with 0.
• Write a function to save the CSV data as a sqlite3 DB. Name the DB "elections"
and the table "elections_2019".
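The CSV-to-sqlite3 step can be sketched with the standard library alone (pandas `DataFrame.to_sql` is an equally valid route). The column types here are an assumption: everything is stored as TEXT for simplicity, and the function/file names are illustrative.

```python
import csv
import sqlite3

def save_csv_to_sqlite(csv_path: str, db_path: str = "elections.db",
                       table: str = "elections_2019") -> int:
    """Load the cleaned CSV and write it into a sqlite3 table.
    Column names come from the CSV header row; all columns are stored
    as TEXT for simplicity. Returns the number of rows written."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c.strip()}" TEXT' for c in header)
    placeholders = ", ".join("?" for _ in header)
    with sqlite3.connect(db_path) as conn:
        conn.execute(f'DROP TABLE IF EXISTS "{table}"')
        conn.execute(f'CREATE TABLE "{table}" ({cols})')
        conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', data)
    return len(data)
```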
Write Code: LLM Client
• Run the LLM server using LM Studio as discussed during the earlier hands-on session
• Test the code by sending some test prompts and checking the results.
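A minimal client sketch, assuming LM Studio is serving its OpenAI-compatible API on the default local port; the URL and model name below are assumptions, so check the Server tab in LM Studio for your actual values.

```python
import json
import urllib.request

# Assumed LM Studio defaults -- verify in the LM Studio Server tab.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt: str, model: str = "local-model",
                  temperature: float = 0.0) -> dict:
    """Build the chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def get_completion(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Test it with a few prompts, e.g. `print(get_completion("Say hello"))`, while the server is running.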
Build a Chatbot using Streamlit
• Streamlit is a library for creating browser UIs in Python.
• Integrate the LLM with the Streamlit front end using the get_completion() function.
• Streamlit has the necessary functions for charting and visualization, so one can build LLM-driven
dashboards quickly.
• Review the front-end code in my_chatbot.py; you can add code for rich visualizations such as bar
charts.
Ref: https://github.com/streamlit/llm-examples/blob/main/Chatbot.py
Ref: https://docs.streamlit.io/develop/tutorials/llms/build-conversational-apps
Write Code: Build DB execution runtime
• Write a module that takes a query as input and executes it on the given
database. You can choose SQL or MongoDB.
• Make sure that the LLM-generated code doesn't cause any harmful side effects,
such as deleting or corrupting any database record
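One way to sketch such a runtime for the SQL option: reject anything other than a single SELECT, and additionally open the sqlite3 connection read-only so writes fail even if the check is bypassed. The function name is illustrative.

```python
import sqlite3

def run_readonly_query(db_path: str, sql: str):
    """Execute a single SELECT against the database and return all rows.
    Anything other than one SELECT statement is rejected, and the
    connection is opened read-only (mode=ro), so LLM-generated SQL
    cannot modify or delete records."""
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select") or ";" in statement:
        raise ValueError("Only single SELECT statements are allowed")
    # URI mode=ro makes sqlite itself refuse any write operation
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(statement).fetchall()
    finally:
        conn.close()
```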
Write Code: Prompting for SQL
• Review the questions and pick those that can be answered from the database
• Write prompts that take an NL query and return SQL from the LLM
• Input should be through the chat GUI, and the SQL should be displayed in the GUI
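A starting-point sketch for such a prompt. The schema string below is illustrative and should be generated from the actual elections_2019 table; expect to iterate on the wording until the model reliably returns bare SQL.

```python
# Illustrative schema -- replace with the real columns of elections_2019.
SCHEMA = (
    "Table elections_2019(constituency TEXT, state TEXT, candidate TEXT, "
    "party TEXT, votes_secured INTEGER, margin INTEGER)"
)

def build_sql_prompt(nl_query: str) -> str:
    """Wrap the user's natural-language question with the schema and
    instructions so the model returns only a SQL query."""
    return (
        "You are a SQL assistant for a sqlite3 database.\n"
        f"Schema: {SCHEMA}\n"
        "Write a single SELECT statement that answers the question. "
        "Return only the SQL, with no explanation.\n"
        f"Question: {nl_query}\nSQL:"
    )
```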
OPTIONAL: Use ReAct framework
• ReAct is about using external tools to perform actions
• Can you build a tool that automatically executes the SQL code, gets the results, and runs
them back through the LLM?
Write Code: Develop the orchestrator
• Now that all modules are coded and tested separately, build the
orchestrator that runs the workflow through all these modules.
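The orchestrator's control flow can be sketched as a thin function over the modules built earlier; the parameter names are illustrative stand-ins for the SQL-prompting LLM call, the DB execution runtime, and the answer/visualization step.

```python
def orchestrate(nl_query: str, to_sql, run_sql, to_answer) -> str:
    """Run the workflow: NL question -> SQL (LLM) -> rows (DB execution
    runtime) -> final text answer (LLM). Each stage is injected as a
    callable so the modules stay independently testable."""
    sql = to_sql(nl_query)            # e.g. prompt LM Studio for SQL
    rows = run_sql(sql)               # e.g. run_readonly_query on elections.db
    return to_answer(nl_query, rows)  # e.g. ask the LLM to summarise the rows
```

Because the stages are plain callables, each module can be swapped for a stub while testing the others.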
Generate NL questions
• Using an LLM, auto-generate about 25 questions
• You are required to develop and refine your prompts so that you get accurate
SQL code out of the LLM