If “efficiency” has always had its limitations (being, by itself, somewhat at a loss when asked who, and under what conditions, may invoke it as an argument for one course of action or another), the “ethical” discussions summoned to judge A.I. are not much more enlightening (which framework is the “right” one: virtue ethics, value ethics, or consequentialist/utilitarian ethics?). And the particular cases further complicate the choice of ethical standpoint from which to pose the problems: the limits of private space and public surveillance; the manipulation of behaviour, whether in consumption or in voting; the opacity of the algorithms employed; the (in)voluntary biases built into automated decision-making systems; human–machine interactivity, from medical care to online promiscuity; the relationship between automation and responsibility; the treatment of technology as a kind of “moral agent”; and the emergence of the “superintelligent singularity” that, fed up with the barbarity and/or banality of our species, decides to annihilate us.