Sunday, 1 September 2019

The End of Morality


The source of morality

 

We treat adults as morally responsible for their actions because they can report (however inaccurately) the causes of their actions. For example, "I did it because I thought that x", rather than "because I was genetically programmed to learn x, and was shown that y". Note that these psychological reports are first-person and active, giving them the status of reasons, as opposed to mere causes.

This faculty of giving first-person reasons is what permits us to be held responsible, and it is deeply programmed into us. These reasons characteristically involve few factors, which are psychological, additive, and reliant on heuristics. This is in contrast to causes, which may involve a large number of nonlinearly interacting factors. Causes could be quite approximate and framed in terms of human-level events, yet still be valid where reasons are not, for example if they include the factors that most strongly determine the action.

  • Now, we should be able to train a computer to come up with reasons for the things it does. Imagine a computer saying "I moved that folder, because I thought you had misfiled it." If it usually responded in this way, I expect it would soon be regarded as responsible.
  • Conversely, if we educated people from childhood to report accurate, third-person, passive accounts of their actions, we might not hold them responsible. Imagine if we all spoke like this: "I made a cup of tea because my brain received a thirst signal, which, given my current brain context, triggered my motor system."

If we consistently understood our own actions in terms of impersonal, deterministic events, perhaps we would not experience responsibility for our actions. Perhaps we would also not feel like blaming people. An element of this already occurs in people with hysterical, compulsive and other disorders: in some cases, failures of the will get attributed to brains, not persons. The phenomenology in such cases is blurred, but may involve being "unable to will" certain actions or thoughts. Such non-personal accounts are set to increase as neurology improves. But the point here is that our sense of psychological reasons for our actions (the sense that permits moral responsibility) is not set in stone.

So, in the future, maybe we will have a mix of free-willed computers and non-free-willed people!

I want to stay neutral as to whether we really have a 'truly free' will. What I am concerned with here is how we view ourselves as humans. Our subjective experiences of volition and self-control are quite variable. And that leads me to think that, over coming centuries, we might progressively come to experience ourselves as less 'truly free' than we do now. Now is the era of free will.


Moral responsibility is useful (for a while)

 

There's something deeply worrying about classifying freedom and responsibility based on the way an individual reports the causes of their actions. What if you feel you're free, but I don't feel I am? Would we become different classes of moral citizen? One way out would be for everyone to become a compatibilist, holding on to an outdated sense of responsibility despite knowing it relies on heterogeneous and fallacious reasons. But a better long-term solution might be to drop moral responsibility and blame altogether.

We will probably come to view our own actions as being determined by countless other agents past and present. (Determined does not mean predictable - because complex physical systems just aren't predictable. But they may be determined, in the sense that the future is determined autonomously by the system's state.) Once we are at this point, where we understand ourselves as determined, then phrases like "He intended to..." will be interpreted causally - just as we today would interpret "The computer intended to...". In other words, intentions serve as shorthand denoting the state of a decision-making system.

Can I still be responsible for my deterministic actions, by appealing to their unpredictability? This would not work, because responsibility depends on actions having reasons. Indeed, our actions are mainly judged by our reasons for acting. Injurious actions that are unintentional are not judged as bad, unless they are indirectly caused by a lapse of judgement (which is itself a psychological reason). So, any concept of free will in which free actions arise (deterministically or not) but are not supported by reasons would not generate responsibility in its usual sense. For morality, you need a kind of freedom that can be supported by reasons. Whether or not the actions are deterministic or unpredictable is irrelevant. All that is needed is a rational (reason-based) outlook on action-producing processes.


So if many people started thinking this way, then over hundreds of years society might drop the notion of responsibility and blame. This isn't a matter of us changing from free agents to unfree ones. Rather, it is a matter of representing freedom as a closed system versus representing it as an open system. Society may come to identify freedom with individuals' deterministic reasoning that is flexible, goal-directed, and responsive to input.

Replacing responsibility with a control system


How could such a society function? Responsibility plays a key societal role, so what would fill that role? Punishment could be replaced with rehabilitation. There would be little place for retribution or vengeance. Deterrence, for example the threat of punishment, might still have a place, though at some point it may be replaced by education, more stringent conditioning, or psychotherapy - or by a stronger form of brainwashing. Altering brain function -- memories, moods, desires -- by deliberate reprogramming may become the norm. It is a natural extension of schooling or drinking coffee, and a more humane way of changing behaviour than reproof. Your child does something wrong, and a computer can tweak their thoughts and behaviour. Crazy as it may seem, the precedents are already set: crude psychosurgery a century ago sometimes achieved what it intended, without any sound brain-mind model.


When we develop the ability to manipulate thoughts, many things will have to change. Will responsibility itself have to change? Or does the ability to manipulate thoughts just move some responsibility to the manipulator? Most likely, the brain tweaking would be determined by an algorithm, at least in part. If someone does something bad after being tweaked, who is responsible? The causal chain leading to the action may grow very long, for example if a tweaking algorithm was programmed based on brain data. Or rather, a causal network would be "responsible", with a diverse range of people and machines all taking joint credit for decisions about tweaks. Individual responsibility is replaced with a network of responsibility. This is nothing new, as countries and companies function in this way, and we are reasonably happy attributing joint responsibility -- even though we might still blame the leader.


The dark age 

 

However, before this point, as neuroscience develops, there is likely to be a period when the operation of the brain is understood well, but thoughts cannot yet be manipulated. This 'dark age' is when morality becomes problematic: it is needed, yet it cannot exist. We will see our actions as determined and predictable, but we will not yet be able to determine them intentionally, as a rational system could. Probably, the brain's operation will be understood well some time before we learn how to fully control it. It doesn't matter too much whether the brain is understood directly by human minds, or indirectly through powerful computer models. But once we start to experience our inner workings as predictable, the limbo era begins. This limbo era may be centuries long, and will require the destruction of laws, the creation of regulative and predictive intelligences, cognitive prosthetics, and the development of technologies for fusing minds or transferring thoughts. All these might be precursors to full control. And finally we will learn how to tweak the brain with scalpel precision.

Only after this is achieved can morality truly die. After that, maybe we will reminisce about the era of morality, an era that any intelligent life must pass through. It was an era when intelligence was insufficient to create models of its own operation, and was thus unable to harness itself, unable to avail itself of its full power. Just as we look back on the savage hunter, they might look back on our era of blame and punishment.


Sunday, 22 July 2012

Human Order: A Philosophical Synthesis

Philosophers over the years have posited a number of different human faculties that guide human action: e.g. reason, passion and will. The goal of this paper is to take ideas from the history of philosophy to put some of these elements into a hierarchy such that higher ones typically master lower elements. In particular, a six-fold categorization of these faculties is outlined, with the most general in a mastery relation over the more specific. The resulting ordering is a simplified diagram of the human psyche for the purposes of planning and executing action. It is argued that a single basic diagram can accommodate a number of historical philosophical, religious, political, sociological and economic ideas, including definitions of morality and value.

Link to New Version (PDF, 1Mb), substantially enhanced (22nd July 2012). Comments most welcome!!!

Wednesday, 7 September 2011

On Being Inconsistent (On Being Just the Right Inconsistency)
Doing is quite different to preaching. I will argue that, in most situations, it is simply wrong to act according to our beliefs about how everybody ought to behave. How I should act is intrinsically different to how everyone should act. I am making a case for hypocrisy.



It is helpful to consider the different roles of rulers and subjects. A ruler is able to make laws, whereas the subject must obey them. Ruling and obedience involve different starting points, different palettes of actions, and different valuation systems - in short, ruling and being a subject are entirely different modes of thought.

To be a subject of rule is to be the subjective I, a subjective being - unlike being a ruler. Rulers must take the third-person stance, and to an extent, deny their own subjectivity. 

It is fascinating that each human mind has the capacity to switch between these two modes of thought. We can think things through either from an impersonal, bird's-eye perspective, or from the egocentric standpoint, where the primary role of thought is to turn perception into action. Many conflicts, both internal and external, may have arisen from this switching.

Whenever one distinguishes two psychological processes, one must be clear on two things: first, the start and end point of each process - at what point do the inputs diverge, and at what point do the outputs converge again; and second, the extent to which the processes overlap or have blurred bounds. In postulating that we have two thought systems, we must say that they do not occur in parallel but in series, that they both access semantic facts about the world, and that they are both constrained by certain tacit assumptions we make globally. These similarities can be demonstrated by considering the streams of thought that may come into awareness as we deliberate over some mundane activity.

Although the faculty of abstract (as opposed to practical) thought is a general one, the ability to apply abstractions to practice is heterogeneous. In the study of behavioural inhibition (or, equivalently, the phenomena termed executive function or cognitive control), the variety in health is great, and pathology yields even further diversity. The difficulty of uniting the abstract with the practical is one that characterises the history of western law and morality. Later I will argue that it is necessary for structured society.


Though I have been demonstrating that hypocrisy arises naturally, I have not yet shown that it is in fact a good thing, or a fortiori the right way to be. So many great ideas of how we should act fall back on theories of the other. Christianity often justifies behaviour with 'do unto others as you would wish them to do unto you'. To assume we can impute concepts to other minds is a large step, so Kantian ethics circumvents the existential problem with the notion of universality: my ability to wish something to become a universal law. However, the issue at stake is the same one, relying on the notion of homogeneity of minds. This notion is coming to be regarded with suspicion, particularly in light of present understanding of cognitive psychology, psychopathology and neuropsychology: minds vary in their internal structure, processing and capability, and more importantly, are contingent on physical state. [I will skip the theoretical problems of determinism in cognition here, though see article "neuroscience is to ethics".]
Virtue ethics often surmounts the difficulty at the expense of parsimony: stipulating that the best action should satisfy a set of criteria is useful, and that usefulness is itself taken to justify the criteria.

But here I am suggesting that neither approach does justice to the real issue, which is that the human mind wishes different things in different contexts, and that definitions of the best action must be sensitive to the context of thought. A poignant thought here is that "Everybody should pay taxes" may easily be thought simultaneously with "I do not want to pay taxes". Other corollaries include "I want to want to pay taxes, but I don't want to pay taxes". To what extent are such pairs of sentiments 'inconsistent'? The human mind remembers a vast number of facts that could be expressed as propositions; it so happens that they are seldom expressed as propositions. If all remembered facts were tokenised into a 'semantic web', some simple literalisation and application rules could be used to generate thousands of contradictions. But so long as the facts remain unverbalised, no contradiction is apparent - yet we acknowledge the latent contradiction as an inconsistency.

A simple psychological example occurs in mental maps, where we may recall at one moment that it is 15 miles from Fareham to Portsmouth, at another moment that it is 15 miles from Fareham to Southampton in the opposite direction, and at a third moment believe it is 20 miles from Portsmouth to Southampton. Only when we simultaneously bring all three facts to consciousness, and use some rules of application, do we declare a contradiction in our semantic web. The law of contradiction seems to instruct us that one of our beliefs is false, and we generalise this to 'ought' statements: if everyone ought to phi, then I should phi.
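
To make the latent contradiction concrete, here is a minimal sketch of such a 'semantic web' - my own illustration, assuming the three recollections are tokenised as distance facts and that a single rule of application (distances along a line add up) is available. The point is only that the contradiction surfaces when, and only when, all three facts are brought together under the rule.

```python
# A toy 'semantic web' of remembered distance facts (the numbers are the
# recalled beliefs from the example above, not real distances).
facts = {
    ("Fareham", "Portsmouth"): 15,
    ("Fareham", "Southampton"): 15,
    ("Portsmouth", "Southampton"): 20,
}

def implied_distance(a, b, via, facts):
    # Rule of application: if `via` lies between a and b on the same line,
    # the a-to-b distance should be the sum of the two legs.
    return facts[(via, a)] + facts[(via, b)]

believed = facts[("Portsmouth", "Southampton")]
implied = implied_distance("Portsmouth", "Southampton", "Fareham", facts)

# Each fact is unobjectionable on its own; only the joint check fails.
if believed != implied:
    print(f"Latent contradiction: believed {believed} miles, "
          f"but the other two facts imply about {implied} miles.")
```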

Certain things only work in an ensemble
A major criticism of universal-based and other-centred ethical accounts is that, even with the best will in the world, acts designated "good" may end up doing overall harm, unless everybody else in the world also applies these ethics. Sceptics from Malthus to Hume recognised that without some imposition from above, most systems fail even in theory.

[Description of the other in terms of microeconomic profit - and the first person. ]
[Action and logical thought have different neural and psychological underpinnings.]
[Pure vs practical reason/wisdom.]
[Altruism's instability in game theory (Fehr & Fischbacher Nature 2003)]

Being hypocritical is a way of teaching people, when the teacher understands good behaviour but is not capable of it. The insight here is twofold:
1) that there is variety in people's abilities to follow morals, and
2) that this ability may be absent even in those who possess the ability to understand and conceive of what is good.
In this way, the hypocrite whose argument is sound does good to society, even though he may sin. He can be qualified to teach goodness, even if he is incapable of practising it.

Indignation is the usual response to somebody who accuses other people of something that he also does. It is natural to feel cheated, unjustly accused, or chafed by such events. Biblical examples again abound, from casting the first stone to logs in eyes. Is this visceral response the right one? Not always, I believe. There are surely occasions where the hypocrite knows he does wrong - be it rarely or often - but yet criticises others. His reasons for criticising may vary, from genuine despair at his own failings to an insecure attempt to belittle others. If the intention is uncertain, or if it is apparently beneficent, then what reason have we to feel wronged? However, if the intention is selfish and inconsiderate, then feeling wronged appears more reasonable.

On the other hand, there are occasions where hypocrites do not even know that they themselves are guilty of the same misdeeds. In these cases should we be indignant or hurt? It is clear to me that this form of hypocrisy is abundant; when the hypocrite is confronted, how often have we heard him cry "Surely not, not I?!" In this situation, it must be beneficial for the hypocrite to be made to understand his own failings, but why should there be negative emotions? In social contexts, punitive negativity is generally retributional, but when the hypocrite's issue is purely conceptual, it seems the equivalent of shouting at a child for not knowing something he has never been taught. In these situations, mutual education can occur without negativity - even when the educator is guilty, or when both parties are.

Behavioural inconsistency as difference in expectation of internal and external rewards
Inconsistency may or may not be psychologically healthy. If it is unhealthy, it is because people feel bad about saying one thing and doing another. I have argued (effectively in reverse) that such bad feelings provide evidence that inconsistency should not be treated as a bad thing. But in fact what is at stake here is much larger: is it healthy for us to be obliged to abide by global rules, when the net yield is worse for me as an individual? Two examples are theft and excess fuel consumption. In both cases, for society to function, the individual must forgo immediate reward. In the case of theft, the global rule (common law) renders it economically beneficial for me not to steal - in the longer term I would be caught and imprisoned. There is no question of inconsistency: I believe theft is wrong, and I do not steal. However, when the global rule does not exert sufficient economic pressure, as in the case of saving the environment, inconsistency arises. I believe we need to consume less, yet I consume excess.

The very presence of inconsistency presses the government into making stronger rules. If everyone had the capacity to repress inconsistency - that is, to always act as they believed one ought to - then what would drive legislation? As a race, we have a natural variety of abilities. I will not make any case here to suggest that intelligence is correlated with behaving more morally. However, I would like to put it this way: conforming to one's own standards is not always a natural thing. And it is this feature of humanity that drives all social structures - of power, economics and law. These vast and pervasive structures allow the individual to remain an individual: that is, they allow us to make decisions locally, without understanding the whole structure of civilisation, and still act for the benefit of humankind. These structures make it economical for me to emit 'good' actions. People who are not sufficiently competent to understand the global ramifications of their choices are now able to choose for themselves.
Even within this totalitarian world view, local freedom can be achieved.

[Freud is rarely incorporated into ethical theories. I can see why, but at the same time, I think that ethics of repression need to be laid to rest once and for all.  ]

[example
I may believe that charity is wrong, but I still donate.
I may believe that taxes should be high, but I don't want to be taxed. ]

One interesting consequence of this opinion is that I now bear the burden of explaining why hypocrisy has, throughout the ages, been considered a sin. If inconsistency is natural and the right way to be, then not only must we revise the conception of hypocrisy, but also explain why it was for so long considered a bad thing.

Tuesday, 11 January 2011

Involution

Mel + Sanj (Jan 2011)

We started with a confibulous alphabet
and drank too many words.
My Great Great Grandfather
painting frescoes with an electric toothbrush
is not futile.
He watched his son grow up
and immature by flint and fire
eating microwaved beards.
His offspring, bald and rich
unplugged the candle
(save the ozone!)
leaving her children invisible
their complatitude waxed
into today's watchoholics.
It is not always what goes after it;
it's how it relates to what goes before.
Mulled Poetry
by Emily, Mel, Sanjay, Steve and others

A poem is like a late night snog
A kite is like a low maintenance pet
Everyone keeps walking in 
Through the window
Why can't they use the door?
Like a Cook lost underground
Garotted by a purple scarf
All the brothers and sisters congregate
The blind leading the blind
Full of starfish nobody wants to buy
It really doesn't have to rhyme.
Requiem to Woolworths 
by Mel, Sanjay, and Steve, New Year's Eve

It's true to some you're known as Woolies,
 And you give to me the warm and fuzzies.
No time to pick and mix your fate;
 For future kids it's now too late -

If this be Paris, they'd demonstrate
 To remonstrate your curt demise;
To Turk's delight you're the sultanate,
 Cheerfully cheap we all surmise.

O Architecture of my youth
 To whom I lost many a tooth,
For plastic toys you are divine;
 You would've survived if you sold wine!

Tuesday, 23 November 2010

for RA

Vacantly, insipidly, I gaze
 past daffodil and rye into a sky
jigsawed in cloud o'er rippling tepid haze -
 when louder from the breeze there grows a sigh
Of distant speed - I know that sound - my spine
 twists with unrest, as racing she appears,
The glint from west along electric line -
 And lo another! from the east he nears.
Before I know, from growl to roar and hiss
 To ard'rous clatter of their fateful meeting
Thrust into, 'gainst, and past, I see them kiss,
 Once done, receding, fading, my heart beating.
  Alone once more, blown hanging dust above,
  Perhaps I dreamed that fast trains cannot love.

Tuesday, 9 November 2010

Dan has added a new contribution under 'Music'

Wednesday, 20 October 2010

Sanjay has added an article on Red Chilli.

Sunday, 25 July 2010

Don't Mention The War!

In 'The Germans', an episode of 70s British sitcom 'Fawlty Towers' about a hotel with pretensions to grandeur in southwest England, Basil Fawlty, the hotel proprietor, famously advises his staff not to mention WW2 to the hotel's German guests. He then proceeds never to stop mentioning the war himself, with sometimes hilarious Freudian slips and then increasingly eccentric behaviour, until eventually Fawlty is consigned to hospital. This article, about climate scepticism, will follow the same plan, except perhaps for the hospital visit. Fortunately, Fawlty didn't once mention the German philosopher Friedrich Nietzsche. I'll try not to mention him either.

One of the discouraging aspects of the climate science debate is its 'us and them' quality. This manifests itself in various forms, from refusals to give information to individuals with clearly staked positions, to campaigns conducted in vitriolic language. Such language makes it difficult to put across complex issues and is certainly at odds with academic practice. Those with a close knowledge of climate science may feel perturbed by, and angry at, those who they feel misrepresent scientific knowledge, attack the institutions of science, and distort and obfuscate the understanding of good policy. They feel the other side has an agenda and is fighting dirty. The more angry one side is with the other, and the more one side accuses the other of malpractice, the more fragmented the debate becomes and the harder it is to see the light for the heat. The problem is that complex matters can be easily distorted, and it therefore requires responsible authority to articulate them.

We also need to be open-minded. Humans are susceptible to confirmation bias: we see what we want to see, or what supports our pre-existing beliefs. The more different the other appears, the worse this bias will be.


We need, in fact, much better communication and understanding, rather than anything that promotes an 'us and them' attitude between scientists and climate sceptics. We need this for multiple reasons: good manners, persuasiveness, and the genuine quest for knowledge and insight through detailed cross-examination. In short, scientists should 'hug a sceptic' and vice versa. The last thing we need is military analogies...

And yet exactly this situation can be illuminated by the position America and Britain face in Afghanistan today. America faces an asymmetric enemy in the Taleban and, even more so (though it is not the same thing), in Al Qaida. As is slowly being understood, to attack unjustly can often turn the population against you. The military has to deal with the difficult problem of building trust with people who are mostly friendly but could be your mortal enemies. Assuming that all are enemies leads to the whole population turning against you; assuming all are friends could be naive and potentially fatal.

People need a similar approach to climate sceptics: in attack, assume all are friendly; in defence, assume all are hostile. Or, if you can work in teams, let one person assume the sceptic is friendly and the other that he is hostile - a 'good cop, bad cop' approach. The rest of the article is devoted to those sceptics who really are an enemy.

I recently picked up 'Don't Think of an Elephant' by George Lakoff. The point of the title, and of the book, is that by mentioning something you evoke it, whether you wish to or not - which I suppose is the point of 'don't mention the war' in 'The Germans'.

Nietzsche, of course, was probably aware of this. He once said that to engage in fighting is to stoop to the level of the person you are fighting with. The dispute between scientists and sceptics can appear like a mixed-up bout of boxing-chess, where one player is trying to play chess and the other is boxing.

In climate science the problem is that the two sides are playing different games. A scientist doing and communicating science is trying to play chess, whereas the sceptic, in trying to influence the media debate, is boxing. Not only do we not really get a good game of chess or a good bout of boxing; someone might get hurt. Furthermore, the scientist, who may move in different circles, cannot defend himself against the blows without ceasing to play chess. But he wants to play chess, because that is what he is trained to do and what he is judged on. When the scientist eventually stops playing chess and starts to fight, his opponent sits down and feigns annoyance when the scientist knocks the pieces over! The scientist has stooped to the level of his opponent, but the opponent is more nimble-footed.

How does this relate to Nietzsche? Nietzsche argued that exceptional individuals are handicapped by conventional morality. The scientist, likewise, is unable to deploy his substantial intelligence in defence against an opponent playing by the rules of the media while he is trying to uphold the rules of science. The scientist could get good at the media game. Alternatively, different people could play different roles. This is better, because no one person can meet the needs of every audience.

This is close to the situation with environmental organisations, which are often specialists in communication. The problem is that these organisations want to play a different game altogether.

Thursday, 8 July 2010

Fwd: big word

From: Stephen Stretton 
Subject: big word

pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing hidden messages on records played in reverse.

http://en.wikipedia.org/wiki/List_of_cognitive_biases
Steve, I'd hoped to explore the idea that we can use our existing judgement faculties to make decisions about the future of mankind. I thought I would explore applying the traditional notion of 'judgement'. So here goes.

Human judgement takes several forms, for example
 1.1) Judgement of fact in the present environment (is it the case that X is F?)
 1.2) Judgement of likely consequences of an action (would action A produce state X or Y?)
 1.3) Judgement of relative value of two potential states (would I like to be in state X or Y?)
 1.4) Judgement of relative value of two objects themselves (would I like to have item X or Y?)

It is because all these judgements have qualitative similarities that it hasn't always been easy to see differences between judgements of fact and judgements of value. But we ought to break down any non-elementary judgement in this way into its components, which may be sequential or parallel, interdependent or independent.

Judgements, as elementary operations, chain together to make decisions. In general, information from many sources is combined to give a single judgement - "I shall do X" - which attains the status of a decision. There are several ways in which we could apply our faculty of judgement to a 'how to act' problem:
Mode
 2.1) 'Gut' judgement, where the various judgements operate automatically in an appropriate sequence
 2.2) Reason - a controlled conscious organising of the order, inputs and outputs of each judgement into an argument
 2.3) In the abstract - where each step of judgement is restricted in its scope to the bare minimum, where all assumptions are made conscious, in addition to the contents of the judgements themselves.
 2.4) Other modes of employing judgement which are higher order, non-human, and therefore as yet indescribable.

Most arguments I have heard for saving the world rely on modes 2.1 and 2.2. This would not be an issue if we were making decisions about tomorrow's lunch. Although my choice for tomorrow's lunch could have an effect upon the survival of the world, that's not why I'm making the decision. The criteria are entirely different, justifying a simplified decision. But let us consider the case where we actually want to decide about the future of the world.
 3.1) "Would I like to be in state X?" where X is 30 years in the future?    (type 1.3 judgement)
 3.2) "Would I like to be in state X?" where X is 2000 years in the future?
 3.3) "Is action A likely to lead to state X?" where the action is now, and the state is in 30 years time (Type 1.2)
 3.4) "Is action A likely to lead to state X?" where the action is now, and the state is in the distant future
 3.5) multiply probabilities with state-value-judgements, generate action likely to put myself in state X

Although I have previously criticised 3.4, I would like to show the difficulties with 3.2. Here we have the usual scenario where one projects oneself into a possible world. Let us consider the following scenarios from these points of view:
 4.1) every living organism on the planet dies, and the world is permanently uninhabitable forever.
 4.2) every living organism dies, leaving a temporarily uninhabitable world (eg for 10,000 years)
 4.3) every human being dies, with a temporarily uninhabitable world
 4.4) every human dies; the world is temporarily uninhabitable for humans but still habitable by many other organisms.
 4.5) a large proportion of human beings die, leaving a small number of people in a temporarily inhospitable world
 4.6) very few people die, leaving a large number of people in a permanently inhospitable (unpleasant compared to now) world
 4.7) very few people die, but the world is temporarily inhospitable for a period
 4.8) no death, but the world is permanently less habitable and the maximum world population is reduced forever
 4.9) everyone on earth permanently forgoes some amount of pleasure to avoid one of the above situations, and otherwise the environment is the same.
 4.10) we reduce the world population voluntarily (without death) to avoid one of the above situations, and the environment remains just as pleasant
 4.11) nothing in the world changes, including population, happiness, environment - nobody forgoes anything.
 4.12) we reduce the world population enough to make the world more pleasant to live in
I've obviously left out lots of intermediate states, but you get the picture.


I think you will see that, with our application of judgement in modes 2.1 and 2.2, it is actually not possible to choose the best of any given subset of the above scenarios. Clearly we need to do at least three things (see the rough sketch after the list):
 5.1) judge the probability of each of these scenarios independently by using factual judgements, contingent on actions
 5.2) judge whether I would like to be in each state
 5.3) convolve these to decide on action
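
As a crude illustration of how 5.1-5.3 fit together, here is a minimal sketch in which scenario probabilities (contingent on action) are multiplied by value judgements and summed per action. The action names, scenario labels, probabilities and values are invented placeholders, not estimates; only the shape of the calculation matters.

```python
# A minimal sketch of steps 5.1-5.3. All numbers are invented placeholders.

# 5.1: judged probability of each scenario, contingent on the action taken
prob = {
    "business as usual": {"4.6 permanently inhospitable": 0.5, "4.11 nothing changes": 0.5},
    "forgo some pleasure": {"4.9 forgo pleasure": 0.9, "4.6 permanently inhospitable": 0.1},
}

# 5.2: judged value of being in each state ("would I like to be in state X?")
value = {
    "4.6 permanently inhospitable": -10.0,
    "4.11 nothing changes": 0.0,
    "4.9 forgo pleasure": -1.0,
}

# 5.3: 'convolve' - combine probability and value into one score per action
def expected_value(action):
    return sum(p * value[scenario] for scenario, p in prob[action].items())

for action in prob:
    print(f"{action}: expected value {expected_value(action):+.2f}")
print("chosen action:", max(prob, key=expected_value))
```

Even in this toy form, steps 5.1 and 5.3 are mechanical; everything contentious is hidden in the value table, which is why step 5.2 deserves the scrutiny that follows.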

Let's assume 5.1 and 5.3 can be performed, with care. We should, I grant, be using our 'best guess' human knowledge from fact, experiment and experience to make those judgements. But what is the status of step 5.2?
 6.1) what constitutes a pleasant place to live in?
 6.2) will pleasantness judgements change with time?
 6.3) can subjective pleasantness be compared from person to person?
 6.4) is the absolute subsistence level relevant as a standard?
 6.5) will humans evolve so that what is 'hospitable' and 'inhospitable' now could reverse?
 6.6) will human technology improve in one situation and not another (mother of invention etc)
 6.7) will humans' minds evolve to higher levels, and attain an entirely different perspective on the situation?
 6.8) will another organism evolve to have an intelligence and society richer than humans have now?
 6.9) is it better to have fewer and happier people?
 6.10) how much happiness compensates for how much unpleasantness/hardship?
 6.11) how can we move from 'would I like X?' to 'would one like X?'

This moves, of course, in the opposite direction to duty ethics, but it does point to where the normativity originates.
Clearly such judgements as 6.x need to be made, and I'm making things difficult for a good reason. Judgements of this kind, I argue, concern so central a part of what it is to be human that to render them with our faculties in modes 2.1 and 2.2 is ridiculous; what is needed here is further abstraction and dissection of assumptions (mode 2.3). The point is that, to the best of our scientific knowledge, the human mind as an organ of judgement will change over timescales of thousands of years. I believe that the most phylogenetically evolved mode is mode 2.3, and that it is therefore closest to the mode of judgement that will be used by humans in the most distant conceivable future, e.g. mode 2.4. (I use conceivability here in the common sense, not in the technical sense of my other conceivability arguments. In fact the conceivability of higher logic or judgement systems should probably be suspended until we have a more technical solution.)


My next post will go on to apply these 'difficult' judgements to subsets of scenarios, with a view to drawing a table of dependencies. I want to show how each fundamental assumption - about the nature of evolution, pleasure as a brain faculty, additivity of utility, the value of life, and so on - impinges upon each type of scenario. As you will notice, I have phrased some of these questions so as to sound empirical, and I will attempt to treat them as such using the tools of neuroeconomics.

--

Wednesday, 30 June 2010

Death in Hull

This blog is rubbish.


It is full of Oxbridge types who think they are smart or funny.


Which they ain't.