With the major advances in artificial intelligence, humanity is moving closer to handing over complex decision-making to machines, without humans in the direct loop. While the rationales for such handovers to automation tend to be positive, the outcomes have not always been so. Automated safety systems have resulted in catastrophic, all-fatal crashes of jetliners. Mis-codings have resulted in lost rockets and satellites. As automated decision-making is embedded in more systems, the risks accrue. This work explores the processes of designing systems for automated decision-making, the regimens for testing them, and the potential outcomes. It also asks the question: What decisions are humans willing to hand over to machines, and why? This question is especially relevant given that humans bear all the consequences, come what may.