
Propositional Logic Inferences


Prerequisite: Wumpus World in Artificial Intelligence

The agent uses logical inference to determine which squares are safe, but it makes plans using A* search. In this section we show how to construct plans by logical inference instead. The basic idea is straightforward:

  1. Construct a sentence that includes
    • Init^{0}, a collection of assertions about the initial state;
    • \text{Transition}^{1}, \ldots, \text{Transition}^{t}, the successor-state axioms for all possible actions at each time step up to some maximum time t; and
    • the assertion that the goal is achieved at time t: \text{HaveGold}^{t} \wedge \text{ClimbedOut}^{t}.
  2. Present the whole sentence to a SAT solver. If the solver finds a satisfying model, the goal is achievable; if the sentence is unsatisfiable, the planning problem is impossible.
  3. Assuming a model is found, extract those variables that represent actions and are assigned true. Together they constitute a plan to achieve the goal. (A minimal sketch of these steps follows this list.)
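The following sketch is not from the article; the symbol names and the toy one-step sentence are purely illustrative, and a brute-force satisfiability check stands in for a real SAT solver. It shows steps 2 and 3 in miniature:

```python
from itertools import product

def find_model(symbols, sentence):
    """Step 2 (brute force): try every truth assignment and return the first
    model satisfying `sentence`, or None if the sentence is unsatisfiable."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if sentence(model):
            return model
    return None  # unsatisfiable: the planning problem is impossible

def extract_plan(model, action_symbols):
    """Step 3: the action variables assigned true constitute the plan."""
    return [a for a in action_symbols if model[a]]

# Toy one-step instance: start in [1,1]; moving forward puts the agent in [2,1].
symbols = ["L11_0", "L21_1", "Forward_0", "Shoot_0"]

def sentence(m):
    init = m["L11_0"]                                           # Init^0
    transition = m["L21_1"] == (m["L11_0"] and m["Forward_0"])  # successor-state axiom
    goal = m["L21_1"]                                           # goal at time 1
    return init and transition and goal

model = find_model(symbols, sentence)
print(extract_plan(model, ["Forward_0", "Shoot_0"]))  # -> ['Forward_0']
```

A real implementation would of course hand a CNF encoding to an off-the-shelf SAT solver rather than enumerating assignments.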


SATPLAN is a propositional planning algorithm. It implements the general idea above, but with a twist: because the agent does not know how many steps it will take to reach the goal, the algorithm tries each possible number of steps t, up to some maximum plan length T_{max}. In this way it is guaranteed to find the shortest plan if one exists. Because of the way SATPLAN searches for a solution, this approach cannot be used in a partially observable environment; SATPLAN would simply set the unobservable variables to the values it needs in order to create a solution.
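A sketch of this outer loop is shown below; the helpers translate_to_sat, solve_sat, and extract_plan are hypothetical placeholders for the encoding step, an arbitrary SAT solver, and the plan-extraction step.

```python
T_MAX = 30  # illustrative upper bound on plan length

def satplan(init, transition, goal, t_max=T_MAX):
    """Try plan lengths t = 0, 1, 2, ... and return the first (hence shortest) plan."""
    for t in range(t_max + 1):
        cnf, action_vars = translate_to_sat(init, transition, goal, t)  # hypothetical helper
        model = solve_sat(cnf)  # assumed to return None when the CNF is unsatisfiable
        if model is not None:
            return extract_plan(model, action_vars)
    return None  # no plan of length <= t_max exists
```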

The construction of the knowledge base is a crucial step in using SATPLAN. On the surface, the wumpus world axioms appear to be sufficient for steps 1(a) and 1(b) above. However, the requirements for entailment (as tested by ASK) and the requirements for satisfiability are quite different. Consider, for example, the agent's initial location [1,1], and suppose the agent's modest goal is to be in [2,1] at time 1. The initial knowledge base contains L_{1,1}^{0}, and the goal is L_{2,1}^{1}. Using ASK we can prove L_{2,1}^{1} if \text{Forward}^{0} is asserted, but we cannot prove it if, say, \text{Shoot}^{0} is asserted instead. SATPLAN finds the plan [\text{Forward}^{0}]; so far, so good. Unfortunately, SATPLAN also finds the plan [\text{Shoot}^{0}]. How is this possible? To find out, we inspect the model that SATPLAN constructs: it includes the assignment L_{2,1}^{0}, which says the agent can be in [2,1] at time 1 by being there at time 0 and shooting. "Didn't we say the agent is in [1,1] at time 0?" one might ask. Yes, we did, but we never told the agent that it cannot be in two places at the same time! For entailment, L_{2,1}^{0} is unknown and therefore cannot be used in a proof; for satisfiability, on the other hand, L_{2,1}^{0} is unknown and can therefore be set to whatever value helps make the goal true.

As a result, SATPLAN is a useful debugging tool for knowledge bases, because it reveals places where knowledge is missing. In this case, we can fix the knowledge base by asserting that the agent is in exactly one location at each time step, using a set of sentences similar to those used to assert the existence of exactly one wumpus. Alternatively, we can assert \neg L_{x, y}^{0} for all locations other than [1,1]; the successor-state axiom for location takes care of later time steps. The same kind of fix ensures that the agent has exactly one orientation.
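As a sketch of how either fix could be generated mechanically (the 4 × 4 board size and the literal-naming convention are assumptions, not the article's), consider:

```python
from itertools import combinations

def exactly_one(literals):
    """CNF clauses asserting that exactly one of `literals` is true."""
    clauses = [list(literals)]                                             # at least one
    clauses += [[f"~{a}", f"~{b}"] for a, b in combinations(literals, 2)]  # at most one
    return clauses

N = 4  # illustrative board size
location_vars_t0 = [f"L_{x}_{y}_0" for x in range(1, N + 1) for y in range(1, N + 1)]
kb_clauses = exactly_one(location_vars_t0)

# Alternatively, assert the unit clauses ~L_x_y_0 for every square other than
# [1,1]; the successor-state axioms then handle later time steps.
alternative = [[f"~L_{x}_{y}_0"]
               for x in range(1, N + 1) for y in range(1, N + 1) if (x, y) != (1, 1)]
```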

SATPLAN has more surprises in store, however. The first is that it finds models that include impossible actions, such as shooting when the agent has no arrow. To understand why, we need to look more closely at what the successor-state axioms say about actions whose preconditions are not satisfied. The axioms correctly predict that nothing will happen when such an action is executed, but they do not say that the action cannot be executed! To avoid generating plans with illegal actions, we must add precondition axioms stating that for an action to occur, its preconditions must be satisfied. For example, we need to say, for each time t, that \text{Shoot}^{t} \Rightarrow \text{HaveArrow}^{t}.
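In clause form (this is only the standard rewriting of an implication, not an additional axiom from the text), each instance of this precondition axiom is the single clause a CNF-based SAT solver would receive:

\neg \text{Shoot}^{t} \vee \text{HaveArrow}^{t}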

This ensures that if a plan selects the Shoot action at any point in time, the agent must actually have an arrow at that time. SATPLAN's second surprise is that it creates plans with multiple simultaneous actions. For example, it may come up with a model in which both \text{Forward}^{0} and \text{Shoot}^{0} are true, which is not allowed. To eliminate this problem, we introduce action exclusion axioms: for every pair of actions A_{i}^{t} and A_{j}^{t}, we add the axiom \neg A_{i}^{t} \vee \neg A_{j}^{t}.
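A possible way to generate these clauses is sketched below; the action list is the usual wumpus-world repertoire and is used here only for illustration.

```python
from itertools import combinations

ACTIONS = ["Forward", "TurnLeft", "TurnRight", "Shoot", "Grab", "Climb"]

def exclusion_clauses(t):
    """One clause ~A_i^t OR ~A_j^t for every pair of distinct actions at time t."""
    return [[f"~{a}_{t}", f"~{b}_{t}"] for a, b in combinations(ACTIONS, 2)]
```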

It should be noted, though, that this requirement depends on how the actions interact: stepping forward and shooting at the same time is actually unproblematic, whereas shooting and grabbing at the same time, for example, is quite impractical. We can allow plans with several simultaneous actions by imposing action exclusion axioms only on pairs of actions that genuinely interfere with each other, and because SATPLAN finds the shortest legal plan, we can be confident that it will take advantage of this capability.
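Restricting the exclusions in the sketch above to interfering pairs only might look like the following; the interference set is a hypothetical example, since the text names only shooting and grabbing as incompatible.

```python
from itertools import combinations

ACTIONS = ["Forward", "TurnLeft", "TurnRight", "Shoot", "Grab", "Climb"]
INTERFERING = {frozenset({"Shoot", "Grab"})}  # hypothetical example set

def exclusion_clauses_partial(t):
    """Exclude only pairs of actions that genuinely interfere with each other."""
    return [[f"~{a}_{t}", f"~{b}_{t}"]
            for a, b in combinations(ACTIONS, 2)
            if frozenset({a, b}) in INTERFERING]
```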

To summarize, SATPLAN finds models of a sentence containing the initial state, the goal, the successor-state axioms, the precondition axioms, and the action exclusion axioms. It can be shown that this collection of axioms is sufficient, in the sense that there are no longer any spurious "solutions": any model that satisfies the propositional sentence is a valid plan for the original problem. Modern SAT-solving technology makes the approach quite practical; a DPLL-style solver, for example, has no difficulty generating the 11-step solution for the wumpus world instance.

The declarative approach to agent construction described in this section is that the agent works by asserting sentences in a knowledge base and performing logical inference. This approach has some weaknesses hidden in phrases such as "for each time t" and "for each square [x, y]". For any practical agent, these phrases must be implemented by code that automatically generates instances of the general sentence schema for insertion into the knowledge base. For a wumpus world of reasonable size, one comparable to a small computer game, we might need a 100 × 100 board and 1000 time steps, leading to knowledge bases with tens or hundreds of millions of sentences. Not only is this impractical, it also points to a deeper problem: we know something about the wumpus world, namely that the "physics" works the same way across all squares and all time steps, yet we cannot express this fact directly in propositional logic. To handle the problem we need a more expressive language, one in which phrases like "for each time t" and "for each square [x, y]" can be written in a natural way. First-order logic can describe a wumpus world of any size and duration in about ten sentences, rather than ten million or ten trillion.
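As a rough illustration of the gain in expressiveness (the notation below is only suggestive and is not taken from the article), the whole family of propositional precondition axioms \text{Shoot}^{t} \Rightarrow \text{HaveArrow}^{t}, one per time step, collapses into a single quantified sentence:

\forall t \; (\text{Shoot}(t) \Rightarrow \text{HaveArrow}(t))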


