trustworthy AI
The why of AI. Artificial intelligence systems continue to grow in popularity as tools for drawing useful predictions from complex data. However, as the decisions of these AI systems grow in impact, we must decide just how much to trust those decisions. We study why models make the decisions they do, and we develop research that enables users to trust those decisions. The complexity of AI models makes it non-trivial to understand exactly why a model makes mistakes, or, even more subtly, why a model might make the right decision for the wrong reasons. Reliable & Trustworthy AI research aims to build the understanding and tools needed to explain the rationale behind model predictions: when they are right and, more importantly, when they are wrong.