Sunday, 1 September 2019

The End of Morality


The source of morality


We treat adults as morally responsible for their actions because they can report (however inaccurately) the causes of their actions. For example, "I did it because I thought that x", rather than "because I was genetically programmed to learn x, and was shown that y". Note that these psychological reports are first-person and active, which gives them the status of reasons, as opposed to mere causes.

This faculty of giving first-person reasons is what permits us to be held responsible, and it is deeply programmed into us. Reasons characteristically involve a small number of factors that are psychological, combine additively, and rely on heuristics. Causes, by contrast, may involve a large number of nonlinearly interacting factors. Causes can be quite approximate and framed in terms of human-level events, yet still be valid where reasons are not, for example when they include the factors that most strongly determine the action.

  • Now, we should be able to train a computer to come up with reasons for the things it does. Imagine a computer saying "I moved that folder, because I thought you had misfiled it." If it usually responded in this way, I expect it would soon be regarded as responsible. (A toy sketch of this contrast follows the list.)
  • Conversely, if we educated people from childhood to report accurate, third-person, passive accounts of their actions, we might not hold them responsible. Imagine if we all spoke like this: "I made a cup of tea because my brain received a thirst signal, and, given my current brain context, it triggered my motor system."
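As a toy illustration of that contrast, here is a minimal sketch in Python. Everything in it (the agent, its "reason" and "causes" reports, the folder name) is invented for illustration; the point is only that the very same action can be narrated in either register.

    # Toy sketch: one action, two styles of self-report.
    # The agent, its "reason", and its "causes" are all hypothetical,
    # invented purely to illustrate the contrast drawn above.

    class Agent:
        def __init__(self):
            self.last_reason = None  # few factors, psychological, first person
            self.last_causes = None  # many factors, mechanistic, third person

        def move_folder(self, folder):
            # The same event, stored under two descriptions.
            self.last_reason = (
                f"I moved {folder} because I thought you had misfiled it."
            )
            self.last_causes = (
                f"A classifier scored {folder} as misfiled; a housekeeping "
                "policy learned from past corrections then triggered the "
                "move routine."
            )

        def report(self, style="reason"):
            return self.last_reason if style == "reason" else self.last_causes

    agent = Agent()
    agent.move_folder("quarterly-invoices")
    print(agent.report("reason"))  # the report that invites responsibility
    print(agent.report("causes"))  # the report that deflects it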

If we consistently understood our own actions in terms of impersonal, deterministic events, perhaps we would not experience responsibility for our actions. Perhaps we would also not feel like blaming people. An element of this already occurs in people with hysterical, compulsive and other disorders. In some such cases, failures of the will get attributed to brains, not persons. The phenomenology here is blurred, but may involve being "unable to will" certain actions or thoughts. Such non-personal accounts are set to increase as neurology improves. But the point here is that our sense of psychological reasons for our actions (the sense that permits moral responsibility) is not set in stone.

So, in the future, maybe we will have a mix of free-willed computers and non-free-willed people!

I want to stay neutral as to whether we really have a 'truly free' will. What I am concerned with here is how we view ourselves as humans. Our subjective experiences of volition and self-control are quite variable. And that leads me to think that, over the coming centuries, we might progressively come to experience ourselves as less 'truly free' than we do now. Now is the era of free will.


Moral responsibility is useful (for a while)


There's something deeply worrying about classifying freedom and responsibility based on the way an individual reports the causes of their actions. What if you feel you're free, but I don't feel I am? Would we become different classes of moral citizen? One way out would be for everyone to become a compatibilist, holding on to an outdated sense of responsibility despite knowing it relies on heterogeneous and fallacious reasons. But a better long-term solution might be to drop moral responsibility and blame altogether.

We will probably come to view our own actions as being determined by countless other agents, past and present. (Determined does not mean predictable: complex physical systems just aren't predictable. But they may still be determined, in the sense that the future follows autonomously from the system's state.) Once we reach this point, where we understand ourselves as determined, phrases like "He intended to..." will be interpreted causally, just as we today would interpret "The computer intended to...". In other words, intentions serve as shorthand denoting the state of a decision-making system.
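The parenthetical point, that a system can be fully determined yet practically unpredictable, is easy to demonstrate with a standard textbook example, the logistic map (nothing specific to brains): a deterministic update rule under which a billionth-part measurement error swamps any forecast within a few dozen steps.

    # Deterministic but practically unpredictable: the logistic map.
    # Each state fully determines the next, yet a tiny measurement
    # error grows until the "prediction" is useless.

    def logistic(x, r=3.9):
        return r * x * (1 - x)  # fully deterministic update rule

    x_true = 0.500000000   # the system's actual state
    x_model = 0.500000001  # our measurement, off by two parts in a billion

    for step in range(1, 51):
        x_true, x_model = logistic(x_true), logistic(x_model)
        if step % 10 == 0:
            print(f"step {step:2d}: true={x_true:.6f} "
                  f"model={x_model:.6f} error={abs(x_true - x_model):.6f}")

By around step 40 the "model" trajectory bears no resemblance to the true one, even though nothing random ever happened.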

Can I still be responsible for my deterministic actions by appealing to their unpredictability? No, because responsibility depends on actions having reasons. Indeed, our actions are mainly judged by our reasons for acting. Injurious actions that are unintentional are not judged as bad, unless they are indirectly caused by a lapse of judgement (which is itself a psychological reason). So any concept of free will in which free actions arise (deterministically or not) but are not supported by reasons would not generate responsibility in its usual sense. For morality, you need a kind of freedom that can be supported by reasons. Whether the actions are deterministic or unpredictable is irrelevant. All that is needed is a rational (reason-based) outlook on action-producing processes.


So if many people started thinking this way, then over hundreds of years society might drop the notions of responsibility and blame. This isn't a matter of us changing from free agents to unfree ones. Rather, it is a matter of representing freedom as a closed system versus representing it as an open system. Society may come to identify freedom with individuals' deterministic reasoning: flexible, goal-directed, and responsive to input.

Replacing responsibility with a control system


How could such a society function? Responsibility plays a key societal role, so what would fill that role? Punishment could be replaced with rehabilitation. There would be little place for retribution or vengeance. Deterrence, for example the threat of punishment, might still have a place. However, at some point it may be replaced by education, more stringent conditioning, or psychotherapy. Or it could be replaced by a stronger form of brainwashing. Altering brain function (memories, moods, desires) by deliberate reprogramming may become the norm. It is a natural extension of schooling or drinking coffee, and a more humane way of changing behaviour than reproof. Your child does something wrong, and a computer can tweak their thoughts and behaviour. Crazy as it may seem, the precedents are already set: crude psychosurgery a century ago sometimes achieved what it intended, without any sound brain-mind model.


When we develop the ability to manipulate thoughts, many things will have to change. Will responsibility itself have to change? Or does the ability to manipulate thoughts simply move some responsibility to the manipulator? Most likely, the brain tweaking would be determined by an algorithm, at least in part. If someone does something bad after being tweaked, who is responsible? The causal chain leading to the action may grow very long, for example if a tweaking algorithm was programmed based on brain data. Or rather, a causal network would be "responsible", with a diverse range of people and machines all taking joint credit for decisions about tweaks. Individual responsibility is replaced with a network of responsibility. This is nothing new: countries and companies already function in this way, and we are reasonably happy attributing joint responsibility, even though we might still blame the leader.
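To make "a network of responsibility" a little more concrete, here is a minimal sketch; every node and credit share in it is hypothetical, invented only to show joint credit spread across a causal network rather than landing on a single individual.

    # Toy sketch: responsibility as shares spread over a causal network.
    # All nodes and weights below are hypothetical.

    credit = {
        "person who acted":          0.25,
        "tweaking algorithm":        0.30,
        "algorithm's programmers":   0.20,
        "brain-data providers":      0.15,
        "guardians who approved it": 0.10,
    }

    assert abs(sum(credit.values()) - 1.0) < 1e-9  # shares of joint credit

    for node, share in sorted(credit.items(), key=lambda kv: -kv[1]):
        print(f"{node:27s} {share:.0%}")

Note that the largest share here sits with a machine and the smallest with the person who acted, which is exactly the inversion the paragraph above anticipates.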


The dark age


However, before this point, as neuroscience develops, there is likely to be a period when the operation of the brain is understood well, but thoughts cannot yet be manipulated. This 'dark age' is when morality becomes problematic: it is needed, yet it cannot exist. We will see our actions as determined and predictable, but they will not yet be intentionally determined by a rational system. Probably, the brain's operation will be understood well some time before we learn how to fully control it. It doesn't matter too much whether the brain is understood directly by human minds, or indirectly through powerful computer models. But once we start to experience our inner workings as predictable, the limbo era begins. This limbo era may last centuries, and will require the dismantling of laws, the creation of regulative and predictive intelligences, cognitive prosthetics, and the development of technologies for fusing minds or transferring thoughts. All these might be precursors to full control. And finally we will learn how to tweak the brain with scalpel precision.

Only after this is achieved can morality truly die. After that, maybe we will reminisce about the era of morality, an era that any intelligent life must pass through. It was an era when intelligence was insufficient to create models of its own operation, and was thus unable to harness itself, unable to avail itself of its full power. Just as we look back on the savage hunter, they might look back on our era of blame and punishment.