March 5, 2020
Ed Vul

I am an Assistant Professor in the UCSD Department of Psychology.

My lab [www.evullab.org] works at the intersection of computational and algorithmic descriptions of human cognition, aiming to reconcile models of human behavior as statistically optimal computations with the findings of cognitive psychology. Basically:

1. How can we approximate optimal statistical computations despite our limited cognitive resources?

2. If we are close to statistically optimal, how do we allocate our limited resources? And what insights do we gain about when, and why, we fail at particular tasks?

Are you here about Voodoo Correlations?
Or about the Crowd Within?
Or about Functional Adaptive Sequential Testing?