Publication

Lazy model expansion: Interleaving grounding with search

Journal contribution - Journal article

Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach to bounded model generation for rich knowledge representation languages, such as Answer Set Programming (ASP) and FO(·), and for CSP modeling languages such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.
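The central idea of the abstract can be pictured with a small propositional toy. The sketch below is purely illustrative and is not the authors' FO(·) implementation; the helper names (satisfies, justification) and the example theory are hypothetical. It only shows the core mechanism: the delayed formula is never grounded, and a justification supplies a recipe to extend any assignment of the grounded part into a complete satisfying assignment.

from itertools import product

# Toy sketch of lazy model expansion (illustrative only; all names hypothetical).
# Grounded part: explicit clauses over GROUNDED_ATOMS, searched exhaustively.
# Delayed part: the formula (a => c) & (c => d) is never grounded; instead a
# justification records how to extend any partial assignment so it holds.

GROUNDED_ATOMS = ["a", "b"]
GROUNDED_CLAUSES = [["a", "-b"], ["-a", "b"]]  # encodes a <=> b

def satisfies(clause, assignment):
    """A clause is satisfied if some literal evaluates to true."""
    return any(assignment.get(lit.lstrip("-")) == (not lit.startswith("-"))
               for lit in clause)

def justification(partial):
    """Recipe to satisfy the delayed formula (a => c) & (c => d)
    given a partial assignment over the grounded atoms."""
    c = bool(partial.get("a"))
    return {"c": c, "d": c}

# "Search" only over the grounded vocabulary; complete each candidate model
# lazily via the justification instead of grounding the delayed formula.
for values in product([False, True], repeat=len(GROUNDED_ATOMS)):
    partial = dict(zip(GROUNDED_ATOMS, values))
    if all(satisfies(cl, partial) for cl in GROUNDED_CLAUSES):
        model = {**partial, **justification(partial)}
        print("model:", model)

In this toy the search never branches on c or d; when a justification would become invalid in the full approach, it is re-derived or the formula is partially grounded, as described above.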
Journal: The Journal of Artificial Intelligence Research
ISSN: 1076-9757
Volume: 52
Pages: 235-286
Year of publication: 2015
BOF key label: yes
IOF key label: yes
BOF publication weight: 1
CSS citation score: 1
Authors: International
Authors from: Private, Higher Education
Accessibility: Open