José Júlio Alferes, Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal
Pierangelo Dell’Acqua
Dept. of Science and Technology - ITN, Linköping University, Sweden
Contribution
The paper presents a compilation of programs formalizing update plus preference reasoning into standard generalized logic programs, and shows the correctness of the transformation.
The compilation is based on:
- a transformation into normal programs of sequences of generalized logic program updates;
- a transformation of logic programs with preferences.
Update reasoning
Updates model dynamically evolving worlds.
Updates differ from revisions, which deal with an incomplete model of a static world.
Knowledge, whether complete or incomplete, can be updated to reflect change in the world.
New knowledge may contradict and override older knowledge; new models may also arise from removing such contradictions.
Preference reasoning
Preferences are employed with incomplete knowledge, when several models are possible.
Preferences act by choosing some of the possible models.
They do this via a partial order among rules: a rule fires only if it is not defeated by a more preferred rule.
Preference and updates combined
Despite their differences, preferences and updates display similarities.
Both can be seen as wiping out rules:
- in preferences, the less preferred rules, so as to remove undesired models;
- in updates, the older rules, inclusively for obtaining models of otherwise inconsistent theories.
This view helps put them together into a single uniform framework. In this framework, preferences can be updated.
LP Framework
Atomic formulae:
A — objective atom
not A — default atom
Formulae:
generalized rule: L0 ← L1, …, Ln
where every Li is an objective or default atom.
LP Framework
Let N={ n1,…, nk } be a set of constants containing a unique name for each generalized rule.
Let P be a set of generalized rules and R a set of priority rules. Then (P,R) is a prioritized logic program.
priority rule: Z ← L1, …, Ln
where Z is a literal of the form nr < nu or not (nr < nu); nr < nu means that rule r is preferred to rule u.
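As an aside, the syntax above can be sketched in plain Python. The encoding below — literals as (atom, positive?) pairs, and all class and field names — is our own illustrative choice, not part of the paper:

```python
from dataclasses import dataclass

Literal = tuple  # ("a", True) encodes objective atom a; ("a", False) encodes not a

@dataclass(frozen=True)
class GenRule:
    name: str        # the unique name nr from the set N
    head: Literal
    body: tuple = ()

@dataclass(frozen=True)
class PriorityRule:
    preferred: str   # nr: name of the preferred rule
    over: str        # nu: name of the less preferred rule
    body: tuple = () # priority rules may themselves have bodies

# r1 from the upcoming example: f <- not t, not n
r1 = GenRule("n1", ("f", True), (("t", False), ("n", False)))
p = PriorityRule("n1", "n3")   # n1 < n3
```

A prioritized logic program (P,R) is then just a pair of a set of GenRule and a set of PriorityRule objects.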
Dynamic Prioritized Programs
Let S = {1, …, s, …} be a set of natural numbers. We call the elements i ∈ S states.
Let (Pi,Ri) be a prioritized logic program for every i ∈ S; then {(Pi,Ri) : i ∈ S} is a dynamic prioritized program.
Intuitively, the meaning of such a sequence results from updating (P1,R1) with the rules from (P2,R2), and then updating the result with … the rules from (Pn,Rn).
Example
Suppose a scenario where Stefano watches programs on football, tennis, or the news.
(1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in the case of international competitions, prefers tennis over football.
In this situation, Stefano has two alternative TV programmes equally preferable: football and tennis.
P1:  f ← not t, not n   (r1)
     t ← not f, not n   (r2)
     n ← not f, not t   (r3)
R1:  n1 < n3
     n2 < n3
     n2 < n1 ← us
     x < y ← x < z, z < y
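To see where the two equally preferable alternatives come from, here is a minimal guess-and-check stable-model enumerator for P1 — an illustrative sketch of the standard Gelfond-Lifschitz construction, with an encoding of our own, not the paper's transformation:

```python
# Rules are (head, positive_body, negative_body); P1 encodes r1-r3 from the slide.
P1 = [
    ("f", [], ["t", "n"]),  # r1: f <- not t, not n
    ("t", [], ["f", "n"]),  # r2: t <- not f, not n
    ("n", [], ["f", "t"]),  # r3: n <- not f, not t
]

def least_model(definite_rules):
    # least model of a definite program by fixpoint iteration
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if head not in m and all(a in m for a in pos):
                m.add(head)
                changed = True
    return m

def stable_models(prog, atoms):
    models = []
    for bits in range(2 ** len(atoms)):
        m = {a for i, a in enumerate(atoms) if bits >> i & 1}
        # Gelfond-Lifschitz reduct: delete rules whose negative body is falsified by m
        reduct = [(h, p, []) for (h, p, n) in prog if not (set(n) & m)]
        if least_model(reduct) == m:
            models.append(frozenset(m))
    return models

print(sorted(sorted(m) for m in stable_models(P1, ["f", "t", "n"])))
# prints [['f'], ['n'], ['t']]
```

Without preferences, P1 has three stable models; the priority rules n1 < n3 and n2 < n3 then discard {n}, leaving {f} and {t} as the slide states.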
Example
(2) Next, suppose that a US-open tennis competition takes place:
Now, Stefano's favourite programme is tennis.
P2:  us   (r4)
R2:  ∅
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news he will prefer news over both football and tennis.
P3:  bn   (r5)
R3:  not (n1 < n3) ← bn
     not (n2 < n3) ← bn
     n3 < n1 ← bn
     n3 < n2 ← bn
Preferred Stable Models
Let P = {(Pi,Ri) : i ∈ S} be a dynamic prioritized program, Q = {Pi ∪ Ri : i ∈ S}, PR = ⋃i (Pi ∪ Ri), and M an interpretation of P.
Def. Default and Rejected rules
Default(PR, M) = {not A : there is no rule (A ← Body) in PR such that M ⊨ Body}
Reject(s, M, Q) = {r ∈ Pi ∪ Ri : ∃ r' ∈ Pj ∪ Rj, head(r) = not head(r'), i < j ≤ s and M ⊨ body(r')}
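A small executable reading of these two definitions, on a hypothetical two-state sequence; the rule names, encoding and helper functions below are ours, for illustration only:

```python
# A rule is (name, head, body) with literals as (atom, positive?) pairs.
R1 = ("r1", ("a", True), [])               # state 1: a.
R2 = ("r2", ("a", False), [("b", True)])   # state 2: not a <- b.
R3 = ("r3", ("b", True), [])               # state 2: b.
Q = {1: [R1], 2: [R2, R3]}                 # Q maps each state i to Pi u Ri

def holds(lit, M):
    atom, positive = lit
    return (atom in M) == positive

def reject(s, M, Q):
    # r in state i is rejected by a conflicting later rule r' whose body holds in M
    out = set()
    for i, rules in Q.items():
        for name, head, _ in rules:
            for j, later in Q.items():
                if i < j <= s:
                    for _, h2, b2 in later:
                        if h2[0] == head[0] and h2[1] != head[1] \
                           and all(holds(l, M) for l in b2):
                            out.add(name)
    return out

def default(rules, M, atoms):
    # not A holds by default if no rule with head A has a body true in M
    return {a for a in atoms
            if not any(h == (a, True) and all(holds(l, M) for l in b)
                       for _, h, b in rules)}

M = {"b"}  # interpretation where b is true and a is false
print(reject(2, M, Q))                           # {'r1'}: overridden by the newer r2
print(default(Q[1] + Q[2], M, {"a", "b", "c"}))  # {'c'}: c has no rule at all
```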
Preferred Stable Models
Def. Unsupported and Unpreferred rules
Unsup(PR, M) = {r ∈ PR : M ⊨ head(r) and M ⊭ body⁻(r)}
Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that:
∃ r' ∈ (PR − Unpref(PR, M)) :
M ⊨ r' < r, M ⊨ body⁺(r'), and
[not head(r') ∈ body⁻(r) or (not head(r) ∈ body⁻(r') and M ⊨ body(r))]
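The same style of encoding can sketch Unsup and the Unpref fixpoint, here restricted to rules with positive heads (a simplification of the full definition; names and encoding are ours). On the running example, r3 is defeated by the more preferred r1 in the interpretation {n}:

```python
# A rule is (name, head, body) with literals as (atom, positive?) pairs.
RULES = [
    ("r1", ("f", True), [("t", False), ("n", False)]),
    ("r2", ("t", True), [("f", False), ("n", False)]),
    ("r3", ("n", True), [("f", False), ("t", False)]),
]
PREFER = {("r1", "r3"), ("r2", "r3")}  # n1 < n3, n2 < n3

def holds(lit, M):
    atom, positive = lit
    return (atom in M) == positive

def unsup(rules, M):
    # head holds in M but the negative body is violated
    return {n for n, h, b in rules
            if holds(h, M) and any(not holds(l, M) for l in b if not l[1])}

def unpref(rules, prefer, M):
    # least set containing Unsup plus every rule defeated by a more
    # preferred, not-yet-unpreferred rule whose positive body holds
    U, changed = set(unsup(rules, M)), True
    while changed:
        changed = False
        for n, h, b in rules:
            if n in U:
                continue
            for n2, h2, b2 in rules:
                if n2 == n or n2 in U or (n2, n) not in prefer:
                    continue
                if not all(holds(l, M) for l in b2 if l[1]):
                    continue  # body+(r') must hold in M
                if (h2[0], False) in b or \
                   ((h[0], False) in b2 and all(holds(l, M) for l in b)):
                    U.add(n)
                    changed = True
    return U

print(unpref(RULES, PREFER, {"n"}))  # {'r3'}: r3 is defeated by r1
```

Since r3 is unpreferred in {n}, that interpretation cannot be a preferred stable model — which is why only {f} and {t} survive in the example.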
Preferred Stable Models
Def. Preferred stable models
Let s be a state, P = {(Pi,Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P. M is a preferred stable model of P at state s iff
M = least([X − Unpref(X, M)] ∪ Default(PR, M))
where:
PR = ⋃_{i≤s} (Pi ∪ Ri)
Q = {Pi ∪ Ri : i ∈ S}
X = PR − Reject(s, M, Q)
Transformation
Let s be a state and P = {(Pi,Ri) : i ∈ S} a dynamic prioritized program. Let Q = ⋃_{i≤s} Pi.
Def. (s,P) transformation
(s,P) = DLP(s,P) ∪ ⋃_{r∈Q} (r) ∪ DA ∪ SPO
DLP(s,P) Transformation
The DLP(s,P) transformation models the dynamic aspects of update reasoning:
DLP(s,P) = RP ∪ UR ∪ IR ∪ DR ∪ RR ∪ CS
DLP(s,P) Transformation
(RP) Rewritten program rules
Replace any rule in Fi = Pi ∪ Ri of the form:
A ← A1, …, An, not An+1, …, not Am
with
A_Fi ← A1, …, An, A⁻n+1, …, A⁻m
and any rule of the form:
not A ← A1, …, An, not An+1, …, not Am
with
A⁻_Fi ← A1, …, An, A⁻n+1, …, A⁻m
(UR) Update rules
A_i ← A_Fi
A⁻_i ← A⁻_Fi
(IR) Inheritance rules
A_i ← A_{i-1}, not A⁻_Fi
A⁻_i ← A⁻_{i-1}, not A_Fi
(DR) Default rules
A⁻_0 ←
DLP(s,P) Transformation
(RR) Rejection rules
reject(nr) ← A_Ft
reject(nr) ← A⁻_Ft
for any rule r in Fi = Pi ∪ Ri and for all i < t ≤ s
(CS) Current state rules
A ← A_s
A⁻ ← A⁻_s
false ← A, A⁻
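A string-level sketch of how the state-indexed UR, IR, DR and CS rules multiply out over atoms and states (RP depends on the concrete program and is omitted; the naming scheme a_1, a_neg_F1, … is an assumption of this sketch):

```python
def dlp_scaffold(atoms, s):
    # Emit the UR, IR, DR and CS rules of DLP(s,P) as plain strings,
    # using "_neg_" for the A-minus atoms and "_Fi" for the annotated copies.
    rules = []
    for A in atoms:
        rules.append(f"{A}_neg_0.")  # (DR) everything is false by default at state 0
        for i in range(1, s + 1):
            rules.append(f"{A}_{i} <- {A}_F{i}.")                        # (UR)
            rules.append(f"{A}_neg_{i} <- {A}_neg_F{i}.")                # (UR)
            rules.append(f"{A}_{i} <- {A}_{i-1}, not {A}_neg_F{i}.")     # (IR)
            rules.append(f"{A}_neg_{i} <- {A}_neg_{i-1}, not {A}_F{i}.") # (IR)
        rules.append(f"{A} <- {A}_{s}.")          # (CS)
        rules.append(f"{A}_neg <- {A}_neg_{s}.")  # (CS)
        rules.append(f"false <- {A}, {A}_neg.")   # (CS)
    return rules

scaffold = dlp_scaffold(["a"], 2)
print(len(scaffold))  # 12 rules per atom: 1 DR + 4 per state + 3 CS
```

This makes the bookkeeping visible: per atom, the scaffold grows linearly with the number of states, one UR/IR group per update.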
DLP(s,P) Transformation
(r) Transformation
The (r) transformation models preference reasoning.
If r = not A ← A1, …, An, not An+1, …, not Am
then ř = not Ă ← Ă1, …, Ăn, not Ăn+1, …, not Ăm
Notation: let [.] be a function from literals to objective atoms:
[A] = A
[not A] = A⁻
(r) Transformation
(r) rules: consists of the following collection of rules, for every A ∈ body⁺(r), every not C ∈ body⁻(r), and any rule u ∈ Q:
[head(ř)] ← ap(nr), not reject(nr)
ap(nr) ← ok(nr), [body(r)], [body⁻(ř)]
bl(nr) ← ok(nr), A⁻, Ă⁻
bl(nr) ← ok(nr), C, Č
Suppose that Q = { r1,…, rk }
(r) Transformation
ok(nr) ← ry(nr, nr1), …, ry(nr, nrk)
ry(nr, nu) ← not (nu < nr)
ry(nr, nu) ← nu < nr, ap(nu)
ry(nr, nu) ← nu < nr, bl(nu)
ry(nr, nu) ← ko(nu)
ry(nr, nu) ← reject(nu)
false ← not ok(nr), not reject(nr)
ko(nr) ← [head(r)], C
Transformation
(DA) Default atom rules
Ă⁻ ← not Ă
(SPO) Strict partial order
false ← nr < nr
false ← nr1 < nr2, nr2 < nr3, (nr1 < nr3)⁻
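The two SPO constraints can be read as checking that the explicit preference relation is irreflexive and transitively closed. A sketch of that check, with pairs (nr, nu) meaning nr < nu (the closure itself would be produced by a transitivity rule such as x < y ← x < z, z < y in the example):

```python
def violates_spo(pairs):
    # Mirrors the constraints: false <- nr < nr, and
    # false <- nr1 < nr2, nr2 < nr3 when nr1 < nr3 does not hold.
    pairs = set(pairs)
    if any(a == b for a, b in pairs):
        return True  # irreflexivity violated
    for a, b1 in pairs:
        for b2, c in pairs:
            if b1 == b2 and (a == c or (a, c) not in pairs):
                return True  # chained preference without its closure (or a cycle)
    return False

# R1 of the example when us holds, without and with the pair required by transitivity
print(violates_spo({("n2", "n1"), ("n1", "n3")}))                # True
print(violates_spo({("n2", "n1"), ("n1", "n3"), ("n2", "n3")}))  # False
```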
Properties of (r)
Let s be a state, P = {(Pi,Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P. Let Q = ⋃_{i≤s} Pi.
Then, the following properties hold:
- ∀r ∈ Q: if reject(nr) ∉ M, then ok(nr) ∈ M
- ∀r ∈ Q: if reject(nr) ∉ M, then (ap(nr) ∈ M iff bl(nr) ∉ M)
- ∀r ∈ Q: ko(nr) ∈ M iff r ∈ Unsup(Q, M)
- ∀r ∈ Q: if reject(nr) ∉ M, then (ko(nr) ∈ M implies bl(nr) ∈ M)
Properties of (s,P)
Thm. Correctness of (s,P)
Let s be a state and P a dynamic prioritized program. An interpretation M is a stable model of (s,P) iff M, restricted to the language of P, is a preferred stable model of P at state s.
Conclusions
We presented a compilation into normal programs of logic programs subject to updates and preferences combined under the stable model semantics.
The preference part of our transformation is modular/incremental with respect to the update part of the transformation.
The size of the transformed program (s,P) is, in the worst case, quadratic in the size of the original dynamic prioritized program P.
An implementation of the transformation is available at:
http://centria.di.fct.unl.pt/~jja/updates/
Future work
Garbage collection of dynamic logic programs.
Combining updates and preferences under the well-founded semantics.
Exploring some application areas:
* abductive reasoning with updatable preferences.
* dynamically reconfigurable web sites, which adapt to updatable user profiles.