Preference-Aware Decision Making


Humans excel at autonomous decision making because of their cognitive flexibility: they make satisficing decisions in uncertain and dynamic environments. When designing autonomous systems, however, existing formal methods, which reduce correctness to a Boolean truth value, fundamentally limit machines from achieving human-like intelligent planning that trades off task completion, correctness, and preferences over alternative outcomes. FINS research on this topic focuses on developing new formal specifications and methods for preference-based planning in uncertain environments. Three key questions drive this work:

1. How can a machine rigorously specify a human's preferences and temporal goals in formal logic?
2. Given a task specified in this language, how can a machine synthesize policies that achieve the most preferred satisfaction of the mission in an uncertain environment?
3. How can researchers enable an agent to adapt its preference-based policy while learning both the human's preferences and the environment it interacts with?
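As a concrete illustration of the second question, the sketch below ranks two reachability goals by preference, computes each goal's maximal satisfaction probability in a toy stochastic grid world via value iteration, and commits to the most preferred goal whose probability clears a threshold. This is a minimal, hypothetical example, not the project's actual formalism: the grid world, the 5-step deadline, the 0.9 threshold, and the use of bounded reachability in place of richer temporal-logic preferences are all illustrative assumptions.

```python
# Hypothetical sketch: satisficing selection among preference-ranked goals.
STATES = [(r, c) for r in range(3) for c in range(3)]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def transition(state, action):
    """Slippery dynamics: the intended move succeeds with probability 0.8,
    otherwise the agent stays put; bumping a wall always stays put."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), 2), min(max(c + dc, 0), 2))
    return {state: 1.0} if nxt == state else {nxt: 0.8, state: 0.2}

def bounded_reach_probability(goal, horizon):
    """Synchronous value iteration for the maximal probability of reaching
    `goal` within `horizon` steps (a quantitative reading of a
    deadline-bounded 'eventually' goal)."""
    v = {s: 1.0 if s == goal else 0.0 for s in STATES}
    for _ in range(horizon):
        v = {s: 1.0 if s == goal else
                max(sum(p * v[t] for t, p in transition(s, a).items())
                    for a in ACTIONS)
             for s in STATES}
    return v

# Goals listed most preferred first; commit to the best one that is
# achievable with sufficiently high probability from the start state.
preferences = [("deliver to (2, 2)", (2, 2)), ("fall back to (0, 2)", (0, 2))]
start, horizon, threshold = (0, 0), 5, 0.9

for name, goal in preferences:
    prob = bounded_reach_probability(goal, horizon)[start]
    print(f"{name}: reach probability within {horizon} steps = {prob:.3f}")
    if prob >= threshold:
        print(f"Committing to most preferred achievable goal: {name}")
        break
```

In this toy run, the most preferred goal at (2, 2) is reachable within the deadline with probability about 0.74, below the threshold, so the agent falls back to the less preferred but more reliably achievable goal at (0, 2). The point of the sketch is the separation it makes between what is preferred and what is achievable, which is exactly the trade-off a preference-aware planner must resolve.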