Seeking Practical CDCL Insights from Theoretical SAT Benchmarks

J Elffers, J Giráldez-Cru, S Gocht, J Nordström, L Simon - IJCAI, 2018 - birs.ca
Instrumented solver: Glucose [AS09] / MiniSat [ES04]

    procedure solve(F)
        while v ← next variable decision do
            decide on v with chosen phase
            do unit (fact) propagation
            if conflict (there is a falsified clause) then
                if no decided variable then return UNSAT
                learn clause from conflict
                backjump (undo bad decisions)
            if time to prune clause database then
                k ← difference to new database size
                remove k clauses with worst clause assessment
            if time for restart then
                undo all decisions
        return SAT
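The solver loop above can be sketched in runnable form. The following is a minimal illustration only, not the Glucose/MiniSat implementation: it keeps the decide / propagate / conflict / backtrack skeleton but replaces clause learning, backjumping, restarts, and database pruning with plain chronological backtracking. Clauses use DIMACS-style literals (v for "v true", -v for "v false"); all names are ours, not from the paper.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses ("do unit propagation").
    Returns False if some clause is falsified (conflict), True otherwise."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            unassigned = [lit for lit in clause
                          if lit not in assignment and -lit not in assignment]
            if not unassigned:
                return False  # falsified clause: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced unit fact
                changed = True
    return True

def solve(clauses, assignment=None):
    """Return a satisfying set of literals, or None if unsatisfiable.
    DPLL-style sketch: no clause learning or restarts (unlike real CDCL)."""
    assignment = set() if assignment is None else set(assignment)
    if not unit_propagate(clauses, assignment):
        return None  # conflict under current decisions
    variables = {abs(lit) for clause in clauses for lit in clause}
    free = [v for v in variables
            if v not in assignment and -v not in assignment]
    if not free:
        return assignment  # every variable decided: SAT
    v = free[0]                 # "next variable decision"
    for phase in (v, -v):       # "decide on v with chosen phase"
        result = solve(clauses, assignment | {phase})
        if result is not None:
            return result
    return None  # both phases failed: backtrack (UNSAT at top level)

# (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3) is satisfiable; (x1) ∧ (¬x1) is not.
print(solve([[1, 2], [-1, 2], [-2, 3]]) is not None)
print(solve([[1], [-1]]))
```

Real CDCL differs precisely in the steps this sketch drops: on conflict it derives a learned clause and backjumps non-chronologically, which is the behavior the instrumented solver measures.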

Seeking Practical CDCL Insights from Theoretical SAT Benchmarks

J Nordström - 2016 - csc.kth.se
Can we explain when CDCL does well and when formulas are hard? Can we run experiments and draw interesting conclusions?

Theory approach: is CDCL hardness related to complexity measures? There is some work in [JMNZ12], but it generated more questions than answers.

Applied approach: vary CDCL settings on industrial benchmarks. There is some work in [KSM11, SM11], but the diversity and sparsity of industrial benchmarks make it hard to draw clear conclusions.

Why not combine the two approaches? Generate scalable & easy versions of theoretical …