feat: Strict Ackermannization tactic (bv_ac_eager) for QF_UFBV #5657
base: master
Conversation
Mathlib CI status (docs):
I did a partial pass; so far I'm missing some documentation to explain what's what, and I have not attempted to grok the correctness of the code yet.
```
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
  intros
  bv_decide
```
I don't see an uninterpreted function here.
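For contrast, a goal that does exercise an uninterpreted function might look like the following (a hypothetical example, not taken from the PR's test suite; the invocation `bv_ac_eager` is assumed from the tactic name in the PR title):

```
example (f : BitVec 64 → BitVec 64) (a b : BitVec 64) (h : a = b) :
    f a = f b := by
  bv_ac_eager
```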
Force-pushed from b59b96f to cc0e4df (Compare)
```
a `BVTy` into an App.
-/
def reifyAp (f : Expr) : OptionT MetaM (Function × Array Argument) := do
  let xs := f.getAppArgs
```
What if we are working on a function like `f : Nat -> Bool -> Bool` applied to something like `f 1 b`? In this situation you would fully give up on reification, even though it is very much possible to handle it by using `(f 1)` as the function instead. Do we want this? And if so, how far do we want to push it? In theory we could even support things like `f : Nat -> Bool -> Nat -> Bool`.
Yes, that's a fair point; I do indeed bail out. I added a comment describing that it's currently very conservative, and that we can extend it to handle a larger fragment.
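A rough sketch of what the suggested extension might look like (the helper names `isBitVecTy` and `splitAtBitVecArgs` are illustrative, not from the PR): absorb a leading prefix of non-`BitVec` arguments into the head, so that `f 1 b` is reified with `(f 1)` as the uninterpreted function and `b` as its only argument.

```
import Lean
open Lean Meta

/-- Illustrative helper: does `e` have type `BitVec _`? -/
def isBitVecTy (e : Expr) : MetaM Bool := do
  let ty ← whnf (← inferType e)
  return ty.isAppOfArity ``BitVec 1

/-- Hypothetical sketch: fold leading non-`BitVec` arguments into the head, so a
partially applied function can itself serve as the uninterpreted function.
Only a leading prefix is absorbed; interleaved non-`BitVec` arguments (as in
`f : Nat -> Bool -> Nat -> Bool`) would need more work. -/
def splitAtBitVecArgs (f : Expr) (args : Array Expr) : MetaM (Expr × Array Expr) := do
  let mut head := f
  let mut rest : Array Expr := #[]
  for a in args do
    if rest.isEmpty && !(← isBitVecTy a) then
      head := mkApp head a
    else
      rest := rest.push a
  return (head, rest)
```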
```
return (mkAppN f args, g) -- NOTE: is there some way to use update to update this?
```
```
partial def introAckForExpr (g : MVarId) (e : Expr) : AckM (Expr × MVarId) := do
```
Going directly under binders violates the invariant that we don't deal with loose bvars in MetaM; all important functions like `isDefEq`, `whnf`, etc. rely on the fact that you do not do this. Is there a particular reason that you break the invariant here?

Furthermore, despite being some of the larger functions in this file, these are among the only ones without a doc string.
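For reference, a minimal sketch of the invariant-preserving alternative, using the standard telescope combinators so that bound variables are replaced by fresh free variables before the body is inspected (`visitBody` is an illustrative name, not from the PR):

```
import Lean
open Lean Meta

/-- Illustrative sketch: visit the body of a binder without exposing loose
bound variables to `MetaM`, by going through `lambdaTelescope` /
`forallTelescope` instead of recursing under the binder directly. -/
def visitBody (e : Expr) (visit : Expr → MetaM Unit) : MetaM Unit :=
  match e with
  | .lam ..     => lambdaTelescope e fun _xs body => visit body
  | .forallE .. => forallTelescope e fun _xs body => visit body
  | _           => visit e
```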
Force-pushed from ffc62dc to 1a6ae3c (Compare)
We implement strict Ackermannization, which is an algorithmic technique to convert QF_UFBV into QF_BV.

The implementation walks over the goals and the hypotheses. When it encounters a function application `(f x1 x2 ... xn)` where the type signature of `f` is `BitVec k1 -> BitVec k2 -> ... -> BitVec kn -> BitVec ko`, it creates a new variable `fAppX : BitVec ko` and an equation `fAppX = f x1 ... xn`. Next, when it encounters another application of the same function `(f y1 y2 ... yn)`, it creates a new variable `fAppY : BitVec ko` and another equation `fAppY = f y1 ... yn`.

After the walk, we loop over all pairs of applications of the function `f` that we have abstracted. In this case, we have `fAppX` and `fAppY`. For these, we build an extensionality equation, which says that

```
hAppXAppYExt : x1 = y1 -> x2 = y2 -> ... -> xn = yn -> fAppX = fAppY
```

Intuitively, this is the only fact that the theory UF can propagate, and we thus encode it directly as a SAT formula, by abstracting out the actual function application into a new free variable.

This implementation creates the Ackermann variables (`fAppX`, `fAppY`) first, and then, in a subsequent phase, adds all the Ackermann equations (`hAppXAppYExt`). This anticipates the next patches, which will implement *incremental* Ackermannization, where we will use the counterexample from the model to selectively add Ackermann equations.
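As a concrete illustration (a hypothetical example, not from the PR's tests), here is the shape of a goal before and after the transformation; in the second goal the applications of `f` have been abstracted and the extensionality hypothesis is the only remaining fact about `f`:

```
-- Before: a QF_UFBV goal mentioning an uninterpreted `f`.
example (f : BitVec 8 → BitVec 8) (a b : BitVec 8) (h : a = b) :
    f a = f b := by
  sorry -- the goal the new tactic is meant to close

-- After (conceptually): `f a` and `f b` abstracted as `fAppX` and `fAppY`;
-- the extensionality hypothesis is all that remains of `f`, and the goal is
-- pure QF_BV.
example (fAppX fAppY a b : BitVec 8)
    (hExt : a = b → fAppX = fAppY) (h : a = b) :
    fAppX = fAppY :=
  hExt h
```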
Force-pushed from 1a6ae3c to 9b5633e (Compare)