Optimizing for the Unoptimized

The quest to replicate human behavior is the holy grail of Artificial Intelligence (AI).

Decision-making under uncertainty, with scarce information, is a human specialty. AI systems are being trained to think like humans and make decisions like them.

This brings us to an important question: how optimal are humans? How many of our decisions are actually well thought out? How many are genuinely optimal? We often make stupid, regrettable decisions when we are emotional and then try to justify them in retrospect (I have made plenty of those!).

We make decisions out of spite, anger, sadness, and many other sub-optimal states. If we were to build an artificial agent that mimics us, should it be optimized to make the best decisions, or to make random, rarely rational, mostly stupid ones?

The agency to know the best path forward and still choose a less optimal (aka stupid) one seems to be a human specialty. How do we replicate that in a machine?

Submitting that assignment on time will fetch you ten additional marks and a grade bump. That's the best path. You know it. Everybody else knows it too. But you watch Netflix anyway. What should the machine do?
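
One way to get this kind of behavior out of a machine, borrowed from reinforcement learning, is Boltzmann (softmax) action selection: the agent knows which action has the highest value, but it only samples that action with high probability, never with certainty. Here is a minimal sketch in Python; the action names and values are invented for the Netflix example above:

```python
import math
import random

# Candidate actions and their values. The numbers are made up for
# illustration: the agent "knows" the assignment is worth more.
ACTION_VALUES = {"do_assignment": 10.0, "watch_netflix": 6.0}

def boltzmann_choice(action_values, temperature):
    """Sample an action from a softmax (Boltzmann) distribution.

    As temperature -> 0 the agent almost always picks the best action;
    higher temperatures make the 'stupid' choice increasingly likely.
    """
    actions = list(action_values)
    max_v = max(action_values.values())  # shift by the max for numerical stability
    weights = [math.exp((action_values[a] - max_v) / temperature) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# A near-optimal agent almost always does the assignment...
print(boltzmann_choice(ACTION_VALUES, temperature=0.1))
# ...while a more human-like one watches Netflix surprisingly often.
print(boltzmann_choice(ACTION_VALUES, temperature=5.0))
```

The single temperature knob is the whole design choice here: turn it down and you get an optimizer, turn it up and you get something a little closer to us.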
