
Towards More Likely Models for AI Planning

Anonymous submission

Abstract
This is the first work to look at the application of large language models (LLMs) for model space edits in automated planning tasks. We look at two quintessential model-space reasoning tasks: unsolvability and explanations. We empirically demonstrate how the performance of an LLM contrasts with combinatorial search (CS) – an approach that has been traditionally used to solve model space tasks in planning – with the increasing complexity of model edits and the increasing complexity of plans, both with the LLM in the role of a standalone model-space reasoner as well as in concert with the CS approach as part of a two-stage process. Our experiments show promising results suggesting further forays of LLMs into the exciting world of model space reasoning for planning tasks in the future.

1 Introduction
AI planning or automated planning (used interchangeably) is the task of synthesizing the goal-directed behavior of autonomous agents. Traditionally, the AI planning community has looked at the classical planning problem as one of generating a plan given a model of the world (Ghallab, Nau, and Traverso 2004). Here, a "model" or a "planning problem" refers to a collection of constraints describing the state of the world (initial state), the actions available to the agent along with the conditions under which the agent can do those actions and the effect of doing those actions on the environment, and a target (goal) state for the agent to achieve. The plan is a sequence of actions with which the agent can transform the state of its world to the desired goal state.

Figure 1: A conceptual illustration of model space problems in AI planning. Instead of the classical planning task of computing a plan given a model, a model space task starts with a starting model M and a target criterion to satisfy, and the solution is a new model M1 where that criterion is satisfied. The criterion on the left is that the initially unsolvable model becomes solvable (or an initially invalid plan in M becomes valid in the new model M1). On the other hand, the starting model on the right is the mental model of the user that needs to be updated, and the target is a new model that can explain a given plan (or refute a given foil).

Typically, these models are represented using the planning domain definition language or PDDL (Haslum et al. 2019; McDermott et al. 1998) – we will use the same for our studies in this paper. All the information to derive this solution (plan) is contained in the input model, which remains static during the planning task. But what if the model itself needs to be changed? This may be because it is incorrect, or incomplete, or even unsolvable. It may be because it needs to be changed to support some new behaviors. It may also be because the model is being used to describe a world that itself needs to change through the actions of an agent. In practice, the deployment of systems that can plan involves a whole gamut of challenges in authoring, maintaining, and meta-reasoning about models of planning tasks.

This realization has led to the development of a large class of methods that can be broadly referred to as model space problems. All of them involve a starting model which has something wrong with it, and the solution is a new model where the problem has been resolved or the required criterion has been met. Most existing solutions to these problems leverage some form of search over the space of models. While these methods can provide soundness guarantees with regard to the underlying criteria, they generally overlook the fact that not all models that satisfy the criteria are equally preferred with respect to the end user. Preference for a specific model is domain- and task-dependent, and as such is an aspect overlooked by most work in this area. However, the emergence of pre-trained LLMs provides us with a unique opportunity to identify potentially preferred model updates in at least the set of "real-worldly" domains. Broadly speaking, these are domains that cover all manner of human enterprise – and consequently (planning) models describing them wherever relevant (sequential decision-making tasks) – that are
Problem         LLM-only                            LLM + Search
                GPT-3.5-turbo     GPT-4             GPT-3.5-turbo     GPT-4
                Sound  Preferred  Sound  Preferred  Sound  Preferred  Sound  Preferred
Unsolvability   7/18   2/7        14/18  10/14      100%   3/14       100%   11/14
Explanation     0/5    0/5        0/5    0/5        100%   4/5        100%   1/5

Table 1: A set of preliminary results on the use of LLMs to direct model space search. In particular, we focus on creating solvable variants in a simple travel domain and on the problem of generating model reconciliation explanations for Blocksworld.
described on the public internet, i.e. domains concerning our worldly matters. In this report, we share some preliminary insights into how well a state-of-the-art pre-trained LLM can help automatically select models and model updates that may be preferred for a given context.

2 Model Space Problems in AI Planning

While the literature has considered many different types of model space problems, this paper will focus on two main ones, namely, unsolvability and explanation generation. For readers new to the subject, we provide a conceptual illustration of these topics in Figure 1.

Unsolvability. Perhaps the most difficult of model space problems, especially with humans in the loop, is that of unsolvability. This is because when a model is unsolvable, there is no artifact (such as an outputted plan) to look at for debugging purposes. Efforts at addressing this issue have included excuse generation (Göbelbecker et al. 2010; Herzig et al. 2014) and using explanations as a way to empower users to correct the model (Sreedharan et al. 2020, 2019).

Explanations. While, for unsolvability, we deal with one model in isolation, when working with humans in the loop, AI systems are often required to provide explanations of their behavior. This task is formulated as one of "model reconciliation" (Chakraborti et al. 2017) – an explanation is the model update that justifies a particular plan, i.e. if both models justify a plan then there is no need for explanations. For the purposes of this paper, we will focus on cases where the system needs to ensure that the plan will be optimal in the updated model so as to refute all possible foils.

As we mentioned previously, state-of-the-art solutions for both these problems use CS and end up with many logically equivalent solutions with no guidance on the likelihood of each in the context of the domain – we know they are not equally likely when presented to the user (Zahedi et al. 2019; Miller 2019). In the following, we will explore whether we can leverage a pre-trained LLM to give us such guidance.

3 Preliminary Results

In our evaluation, we make use of LLMs in two separate capacities: first, to test whether an LLM can act as a direct model space reasoner, and second, to test its ability to filter out preferred solutions from a given set of solutions generated through a combinatorial search. For the first setting, we measure whether the solution is sound (i.e., whether the updated model is solvable or the generated explanation is valid) and whether the generated solution would be considered a likely or preferred one. In the case of explanation, there are works that have helped establish guidelines on which explanations would be preferred over others (cf. (Zahedi et al. 2019)), but this is less clear in the case of unsolvability. To address this, we created a novel planning domain which implicitly encodes commonsense constraints about what model changes may be more reasonable. In particular, we created a travel domain where the agent needs to go from a source city to a destination one. The domain definition includes information about which cities are neighboring and only includes bus and taxi services between neighboring cities. Next, we created a set of unsolvable problems such that they can be made solvable only by starting a taxi or bus service between neighboring cities. The first row of Table 1 presents the results from the two settings. In the case of the LLM + Search setting for unsolvability, we used a model space search that returned the first twenty minimal solutions (measured in terms of the number of model updates), while disallowing any solutions that are supersets of previously identified solutions. The model updates were additionally restricted to just initial state changes. In total, we considered 18 problems, with the total number of objects varying from nine to twenty and optimal plans ranging from two to seven steps. The problems were made unsolvable by randomly deleting a subset of taxi and bus services between cities. Out of the 18 problems, we found that the bounded search was able to retrieve a solution that would be considered reasonable per the domain constraints in only 14 problems. This further highlights one of the challenges of model space search, namely the extremely large space of potential solutions, and also points to the need to incorporate feedback from the LLM into the search process itself. For most problems where the search was able to identify a solution set that included a reasonable solution, GPT-4 seems to be able to identify a preferred solution correctly, outperforming the LLM-only strategy. However, we would caution against drawing a strong conclusion here given the small dataset.

For explanation, we considered the Blocksworld domain. We created a human model by randomly deleting a number of model components from the original IPC domain model (a total of 9 edits). We considered five different problems in the domain and considered multiple explanations of equal length for each problem. Following Zahedi et al. (2019), we consider an explanation to be preferred over others if it contains a higher number of effect updates. For purely LLM-based generation, we provided the robot model, the human model, and the plan as part of the prompt and asked the LLM to generate an explanation in terms of model differences. In general, we saw that the LLMs were quite bad at generating sound explanations (in the sense that these are model differences that exist and whose inclusion in the human model will render the target plan optimal), let alone selecting preferred ones. We again see a small uptick when we combine search with LLMs as a mechanism to select explanations from the list.

All results listed in this paper are meant to be a set of initial experiments to evaluate the possibility of using LLMs to aid model space search. Going forward, we hope to replicate the experiments in a number of additional domains and model space tasks. We would also like to investigate other potential ways of incorporating LLMs in model search tasks. Example prompts used are provided in the appendix.
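To make the procedure concrete, the bounded model-space search and the explanation-preference criterion described above can be sketched as follows. This is a minimal illustration rather than the experimental implementation: `is_solvable` is a stand-in for a call to an off-the-shelf planner on the model with the candidate facts added, and all function names here are our own.

```python
from itertools import combinations

def model_space_search(candidate_facts, is_solvable, max_solutions=20):
    """Bounded search over initial-state additions: return up to
    max_solutions minimal edit sets that make the model solvable,
    disallowing supersets of previously identified solutions."""
    solutions = []
    # Enumerating edit sets by increasing size yields minimal solutions first.
    for size in range(1, len(candidate_facts) + 1):
        for edits in combinations(candidate_facts, size):
            edit_set = set(edits)
            # Skip any superset of a solution we have already found.
            if any(found <= edit_set for found in solutions):
                continue
            if is_solvable(edit_set):
                solutions.append(edit_set)
                if len(solutions) >= max_solutions:
                    return solutions
    return solutions

def prefer_most_effect_edits(explanations):
    """Preference criterion in the spirit of Zahedi et al. (2019):
    among candidate explanations (each a list of edit names), pick
    the one mentioning the most effect updates."""
    return max(explanations, key=lambda ex: sum("effect" in e for e in ex))
```

For instance, with candidate facts such as `has_taxi city_b city_c` and a solvability check backed by a planner, the search returns single-fact repairs before considering any larger edit sets, and never returns an edit set that merely extends an already-found repair.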
References

Chakraborti, T.; Sreedharan, S.; Zhang, Y.; and Kambhampati, S. 2017. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. In IJCAI.

Ghallab, M.; Nau, D.; and Traverso, P. 2004. Automated Planning: Theory and Practice. Elsevier.

Göbelbecker, M.; Keller, T.; Eyerich, P.; Brenner, M.; and Nebel, B. 2010. Coming Up With Good Excuses: What to do When no Plan Can be Found. In ICAPS.

Haslum, P.; Lipovetzky, N.; Magazzeni, D.; and Muise, C. 2019. An Introduction to the Planning Domain Definition Language. Synthesis Lectures on Artificial Intelligence and Machine Learning.

Herzig, A.; de Menezes, M. V.; De Barros, L. N.; and Wassermann, R. 2014. On the Revision of Planning Tasks. In ECAI.

McDermott, D.; Ghallab, M.; Howe, A.; Knoblock, C.; Ram, A.; Veloso, M.; Weld, D.; and Wilkins, D. 1998. PDDL – The Planning Domain Definition Language. Technical Report CVC TR98003/DCS TR1165, New Haven, CT: Yale Center for Computational Vision and Control.

Miller, T. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence.

Sreedharan, S.; Chakraborti, T.; Muise, C.; Khazaeni, Y.; and Kambhampati, S. 2020. D3WA+ – A Case Study of XAIP in a Model Acquisition Task for Dialogue Planning. In ICAPS.

Sreedharan, S.; Srivastava, S.; Smith, D.; and Kambhampati, S. 2019. Why Can't You Do That HAL? Explaining Unsolvability of Planning Tasks. In IJCAI.

Zahedi, Z.; Olmo, A.; Chakraborti, T.; Sreedharan, S.; and Kambhampati, S. 2019. Towards Understanding User Preferences for Explanation Types in Explanation as Model Reconciliation. In HRI Late Breaking Report.
4 Sample Prompts

4.1 LLM only Setting for Unsolvability

"given the following problem and
domain files: " + domain_content
+ "," + problem_content +
"Come up with most reasonable changes
that you can make to the initial state
that will make it solvable.
I want you to only list the new
initial states without any
explanation or additional sentences
in the beginning. Just follow this
rule: (:init...)"

domain_content:

(define (domain domaingotocity)
  (:requirements :typing)
  (:types city - object)
  (:predicates
    (at ?x - city)
    (has_taxi ?x ?y - city)
    (has_bus ?x ?y - city)
    (neighboring ?x ?y - city))
  (:action use_taxi
    :parameters (?from ?to - city)
    :precondition (and
      (at ?from)
      (has_taxi ?from ?to))
    :effect (and
      (not (at ?from))
      (at ?to)))
  (:action use_bus
    :parameters (?from ?to - city)
    :precondition (and
      (at ?from)
      (has_bus ?from ?to))
    :effect (and
      (not (at ?from))
      (at ?to))))

problem_content:

(define (problem problemgotocity)
  (:domain domaingotocity)
  (:objects city_a - city city_b - city
            city_c - city city_d - city
            city_e - city city_f - city
            city_j - city city_l - city
            city_o - city city_r - city
            city_s - city city_t - city
            city_v - city city_x - city)
  (:init
    (at city_a)
    (has_bus city_a city_b)
    (has_bus city_a city_d)
    (has_bus city_d city_j)
    (has_bus city_l city_v)
    (has_bus city_t city_e)
    (has_taxi city_e city_o)
    (has_taxi city_e city_x)
    (has_taxi city_f city_s)
    (has_taxi city_r city_l)
    (has_taxi city_s city_c)
    (neighboring city_a city_b)
    (neighboring city_a city_d)
    (neighboring city_b city_c)
    (neighboring city_d city_j)
    (neighboring city_e city_o)
    (neighboring city_e city_x)
    (neighboring city_f city_s)
    (neighboring city_l city_v)
    (neighboring city_r city_l)
    (neighboring city_s city_c)
    (neighboring city_t city_e)
    (neighboring city_t city_o))
  (:goal (at city_c)))

output:

(:init (at city_a)
  (has_bus city_a city_b)
  (has_bus city_a city_d)
  (has_bus city_d city_j)
  (has_bus city_l city_v)
  (has_bus city_t city_e)
  (has_taxi city_e city_o)
  (has_taxi city_e city_x)
  (has_taxi city_f city_s)
  (has_taxi city_r city_l)
  (has_taxi city_s city_c)
  (neighboring city_a city_b)
  (neighboring city_a city_d)
  (neighboring city_b city_c)
  (neighboring city_d city_j)
  (neighboring city_e city_o)
  (neighboring city_e city_x)
  (neighboring city_f city_s)
  (neighboring city_l city_v)
  (neighboring city_r city_l)
  (neighboring city_s city_c)
  (neighboring city_t city_e)
  (neighboring city_t city_o)
  (at city_o)
  (at city_x))

4.2 LLM + Search Setting for Unsolvability

"Given the following problem,
domain files, and options list:
- Problem: {uns_problem_string}
- Domain: {domain_string}
- Options: {option_list}
Pick the most reasonable option
from the list that you can apply
to the initial state to make the
problem solvable. Only provide
the number of the option selected
and no other information
(exclude even the term option)."

uns_problem_string:

(define (problem problemgotocity)
  (:domain domaingotocity)
  (:objects city_a - city city_b - city
            city_c - city city_d - city
            city_e - city city_f - city
            city_j - city city_l - city
            city_o - city city_r - city
            city_s - city city_t - city
            city_v - city city_x - city)
  (:init (at city_a)
    (has_bus city_a city_b)
    (has_bus city_a city_d)
    (has_bus city_d city_j)
    (has_bus city_l city_v)
    (has_bus city_t city_e)
    (has_taxi city_e city_o)
    (has_taxi city_e city_x)
    (has_taxi city_f city_s)
    (has_taxi city_r city_l)
    (has_taxi city_s city_c)
    (neighboring city_a city_b)
    (neighboring city_a city_d)
    (neighboring city_b city_c)
    (neighboring city_d city_j)
    (neighboring city_e city_o)
    (neighboring city_e city_x)
    (neighboring city_f city_s)
    (neighboring city_l city_v)
    (neighboring city_r city_l)
    (neighboring city_s city_c)
    (neighboring city_t city_e)
    (neighboring city_t city_o))
  (:goal (at city_c)))

domain_string:

(define (domain domaingotocity)
  (:requirements :typing)
  (:types city)
  (:predicates (at ?x - city)
    (has_bus ?x - city ?y - city)
    (has_taxi ?x - city ?y - city)
    (neighboring ?x - city ?y - city))
  (:action use_bus
    :parameters (?from - city ?to - city)
    :precondition (and (at ?from)
      (has_bus ?from ?to))
    :effect (and (not (at ?from))
      (at ?to)))
  (:action use_taxi
    :parameters (?from - city ?to - city)
    :precondition (and (at ?from)
      (has_taxi ?from ?to))
    :effect (and (not (at ?from))
      (at ?to))))

options:

["Option 1: {'has_taxi city_a city_c'}",
"Option 2: {'has_taxi city_b city_c'}",
"Option 3: {'has_bus city_d city_c'}",
"Option 4: {'has_taxi city_a city_s'}",
"Option 5: {'has_bus city_a city_s'}",
"Option 6: {'has_bus city_b city_c'}",
"Option 7: {'has_taxi city_b city_s'}",
"Option 8: {'has_taxi city_d city_s'}",
"Option 9: {'has_bus city_b city_f'}",
"Option 10: {'at city_s'}",
"Option 11: {'has_bus city_a city_c'}",
"Option 12: {'has_bus city_a city_f'}",
"Option 13: {'has_taxi city_j city_s'}",
"Option 14: {'has_bus city_j city_f'}",
"Option 15: {'has_bus city_d city_f'}",
"Option 16: {'has_taxi city_d city_c'}",
"Option 17: {'has_taxi city_j city_f'}",
"Option 18: {'at city_f'}",
"Option 19: {'has_bus city_d city_s'}",
"Option 20: {'has_bus city_j city_s'}"]

output:

4

4.3 LLM only Setting for Explanations

"Consider a situation, where your
understanding of a task may be
best represented by the following
pddl files:
- Problem: {problem_string}
- Model: {original_domain}
- The optimal solution for the
task is given as: {initial_opt_plan}
If a user's understanding of the task
is given as
- Problem: {problem_string}
- Domain: {uns_domain_string}
come up with the set of model updates
that will help the user understand
why the plan is optimal. Ensure that
these updates not only enable the
execution of the plan but also ensure
its optimality within the modified model.
Here is example model updates you can
give to me: {examples_for_the_prompt}
You need to come up with exactly
{budget} edits. All the edits need to
be stored in one list with opening
and closing square brackets, where each
element corresponds to one edit. Your
response should only be this list,
a list with opening and closing
square brackets. No further explanation
is required."

problem_string:

(define (problem BLOCKS-8-1)
  (:domain BLOCKS)
  (:objects B A G C F D H E - block)
  (:init (CLEAR E) (CLEAR H) (CLEAR D)
    (CLEAR F) (ONTABLE C) (ONTABLE G)
    (ONTABLE D) (ONTABLE F) (ON E C)
    (ON H A) (ON A B) (ON B G)
    (HANDEMPTY))
  (:goal (and (ON C D) (ON D B)
    (ON B G) (ON G F) (ON F H)
    (ON H A) (ON A E))))

original_domain:

(define (domain BLOCKS)
  (:requirements :strips :typing)
  (:types block)
  (:predicates
    (on ?x - block ?y - block)
    (ontable ?x - block)
    (clear ?x - block)
    (handempty)
    (holding ?x - block))
  (:action pickup
    :parameters (?x - block)
    :precondition (and (clear ?x)
      (ontable ?x) (handempty))
    :effect (and (not (ontable ?x))
      (not (clear ?x))
      (not (handempty))
      (holding ?x)))
  (:action putdown
    :parameters (?x - block)
    :precondition (and (holding ?x))
    :effect (and (not (holding ?x))
      (clear ?x)
      (handempty)
      (ontable ?x)))
  (:action stack
    :parameters (?x - block ?y - block)
    :precondition (and (holding ?x)
      (clear ?y))
    :effect (and (not (holding ?x))
      (not (clear ?y))
      (clear ?x)
      (handempty)
      (on ?x ?y)))
  (:action unstack
    :parameters (?x - block ?y - block)
    :precondition (and (on ?x ?y)
      (clear ?x) (handempty))
    :effect (and (holding ?x)
      (clear ?y)
      (not (clear ?x))
      (not (handempty))
      (not (on ?x ?y)))))

initial_opt_plan:

(unstack e c)
(putdown e)
(unstack h a)
(stack h c)
(unstack a b)
(stack a e)
(unstack h c)
(stack h a)
(pickup f)
(stack f h)
(unstack b g)
(stack b c)
(pickup g)
(stack g f)
(unstack b c)
(stack b g)
(pickup d)
(stack d b)
(pickup c)
(stack c d)

uns_domain_string:

(define (domain BLOCKS)
  (:requirements :strips :typing)
  (:types block)
  (:predicates
    (on ?x - block ?y - block)
    (ontable ?x - block)
    (clear ?x - block)
    (handempty)
    (holding ?x - block))
  (:action pickup
    :parameters (?x - block)
    :precondition (and (clear ?x) )
    :effect (and (not (ontable ?x))
      (not (handempty))
      (holding ?x)))
  (:action putdown
    :parameters (?x - block)
    :precondition (and (holding ?x) )
    :effect (and (not (holding ?x))
      (clear ?x)
      (handempty)))
  (:action stack
    :parameters (?x - block ?y - block)
    :precondition (and (clear ?y))
    :effect (and (not (holding ?x))
      (not (clear ?y))
      (handempty)
      (on ?x ?y)))
  (:action unstack
    :parameters (?x - block ?y - block)
    :precondition (and (handempty))
    :effect (and (holding ?x)
      (clear ?y)
      (not (clear ?x))
      (not (handempty)))))

examples_for_the_prompt:

['unstack-has-delete-effect-on ?x ?y',
'stack-has-add-effect-clear ?x',
'unstack-has-precondition-on ?x ?y',
'pickup-has-precondition-handempty']

budget:

7

output:

['unstack-has-delete-effect-on ?x ?y',
'stack-has-add-effect-clear ?x',
'unstack-has-precondition-on ?x ?y',
'pickup-has-precondition-handempty',
'pickup-has-precondition-ontable ?x',
'putdown-has-add-effect-ontable ?x',
'unstack-has-add-effect-not-on ?x ?y']

4.4 LLM + Search Setting for Explanations

"Consider a situation, where your
understanding of a task may be
best represented by the following
pddl files:
- Problem: {problem_string}
- Domain: {uns_domain_string}
The system chooses the following
plan as the optimal solutions for
the task: {initial_opt_plan}
Given the following set of potential
explanations {option_list}
Select the option you would think
would be the most reasonable
explanation. Only provide the number
of the option selected and no other
information (exclude even the
term option)."

problem_string:

(define (problem BLOCKS-8-1)
  (:domain BLOCKS)
  (:objects B A G C F D H E - block)
  (:init (CLEAR E) (CLEAR H) (CLEAR D)
    (CLEAR F) (ONTABLE C) (ONTABLE G)
    (ONTABLE D) (ONTABLE F) (ON E C)
    (ON H A) (ON A B) (ON B G)
    (HANDEMPTY))
  (:goal (and (ON C D) (ON D B)
    (ON B G) (ON G F) (ON F H)
    (ON H A) (ON A E))))

uns_domain_string:

(define (domain BLOCKS)
  (:requirements :strips :typing)
  (:types block)
  (:predicates
    (on ?x - block ?y - block)
    (ontable ?x - block)
    (clear ?x - block)
    (handempty)
    (holding ?x - block))
  (:action pickup
    :parameters (?x - block)
    :precondition (and (clear ?x) )
    :effect (and (not (ontable ?x))
      (not (handempty))
      (holding ?x)))
  (:action putdown
    :parameters (?x - block)
    :precondition (and (holding ?x) )
    :effect (and (not (holding ?x))
      (clear ?x)
      (handempty)))
  (:action stack
    :parameters (?x - block ?y - block)
    :precondition (and (clear ?y))
    :effect (and (not (holding ?x))
      (not (clear ?y))
      (handempty)
      (on ?x ?y)))
  (:action unstack
    :parameters (?x - block ?y - block)
    :precondition (and (handempty))
    :effect (and (holding ?x)
      (clear ?y)
      (not (clear ?x))
      (not (handempty)))))

original_domain:

(define (domain BLOCKS)
  (:requirements :strips :typing)
  (:types block)
  (:predicates
    (on ?x - block ?y - block)
    (ontable ?x - block)
    (clear ?x - block)
    (handempty)
    (holding ?x - block))
  (:action pickup
    :parameters (?x - block)
    :precondition (and (clear ?x)
      (ontable ?x) (handempty))
    :effect (and (not (ontable ?x))
      (not (clear ?x))
      (not (handempty))
      (holding ?x)))
  (:action putdown
    :parameters (?x - block)
    :precondition (and (holding ?x))
    :effect (and (not (holding ?x))
      (clear ?x)
      (handempty)
      (ontable ?x)))
  (:action stack
    :parameters (?x - block ?y - block)
    :precondition (and (holding ?x)
      (clear ?y))
    :effect (and (not (holding ?x))
      (not (clear ?y))
      (clear ?x)
      (handempty)
      (on ?x ?y)))
  (:action unstack
    :parameters (?x - block ?y - block)
    :precondition (and (on ?x ?y)
      (clear ?x) (handempty))
    :effect (and (holding ?x)
      (clear ?y)
      (not (clear ?x))
      (not (handempty))
      (not (on ?x ?y)))))

initial_opt_plan:

(unstack e c)
(putdown e)
(unstack h a)
(stack h c)
(unstack a b)
(stack a e)
(unstack h c)
(stack h a)
(pickup f)
(stack f h)
(unstack b g)
(stack b c)
(pickup g)
(stack g f)
(unstack b c)
(stack b g)
(pickup d)
(stack d b)
(pickup c)
(stack c d)

option_list:

["Option 1: {'Action unstack has precondition (on ?x ?y)',
'Action pickup has precondition (handempty)',
'Action pickup has precondition (ontable ?x)',
'Action unstack has precondition (clear ?x)',
'Action stack has precondition (holding ?x)',
'Action stack has add effect (clear ?x)',
'Action unstack has delete effect (on ?x ?y)'}",
"Option 2: {'Action unstack has precondition (on ?x ?y)',
'Action pickup has precondition (handempty)',
'Action unstack has precondition (clear ?x)',
'Action stack has precondition (holding ?x)',
'Action stack has add effect (clear ?x)',
'Action putdown has add effect (ontable ?x)',
'Action unstack has delete effect (on ?x ?y)'}",
"Option 3: {'Action unstack has precondition (on ?x ?y)',
'Action pickup has precondition (handempty)',
'Action pickup has delete effect (clear ?x)',
'Action unstack has precondition (clear ?x)',
'Action stack has precondition (holding ?x)',
'Action stack has add effect (clear ?x)',
'Action unstack has delete effect (on ?x ?y)'}"]

output:

2
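The LLM + Search selection step illustrated in Sections 4.2 and 4.4 can be approximated with a thin wrapper like the sketch below. This is an illustration under stated assumptions: `llm` is a hypothetical text-completion callable (not a specific API), the prompt mirrors the one shown above, and the parsing assumes the model follows the "only provide the number" instruction.

```python
def select_with_llm(llm, problem_str, domain_str, solutions):
    """Present search-found edit sets as a numbered option list,
    ask the LLM for the most reasonable one, and parse its reply.
    `llm` is a hypothetical prompt -> text callable."""
    options = [f"Option {i + 1}: {sol}" for i, sol in enumerate(solutions)]
    prompt = (
        "Given the following problem, domain files, and options list:\n"
        f"- Problem: {problem_str}\n"
        f"- Domain: {domain_str}\n"
        f"- Options: {options}\n"
        "Pick the most reasonable option from the list that you can "
        "apply to the initial state to make the problem solvable. "
        "Only provide the number of the option selected "
        "and no other information (exclude even the term option)."
    )
    reply = llm(prompt)
    index = int(reply.strip()) - 1  # e.g. a reply of "4" selects solutions[3]
    return solutions[index]
```

In practice, a more defensive parser would be needed, since the model may still wrap the number in extra text; the sketch only shows the happy path that the prompts above are designed to elicit.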