Thursday, February 10, 2011

The need for identity

 
It’s often taken as axiomatic, in any discussion about the human condition nowadays, that people need high self-esteem for optimum well-being. This would be very hard to measure, but it seems a reasonable assumption. Self-esteem breeds self-respect and self-confidence. These are, arguably, essential characteristics of well-rounded personalities who are comfortable in themselves and likely to be valuable members of their communities. Underlying all this is the need for a sense of identity: we’re unlikely to have high self-esteem if no-one knows or cares about us; we all need to be recognised by our peers and acknowledged as of value to our communities. We all need to be able to answer the question: ‘Who am I, and where do I fit into society?’

That we feel these needs is not surprising when we look at our evolutionary background. Throughout most of the period of human evolution and development, the available archaeological evidence indicates that we lived in small, self-sufficient groups in which the way of life, based on hunting and gathering food, did not change much for thousands of years. Humans with body forms much like ours have been around at least since the Cro-Magnons emerged about 50,000 years ago. They were essentially fully modern humans in terms of skeletal structure and the ability to produce tools. There is some evidence of advanced cultural traits, but these developed only slowly. ‘Technology’ changed very gradually: the Stone Age gave way to the Bronze and Iron Ages, although not at the same time in all communities. Somewhere around 5,000–8,000 years ago the archaeological record indicates the beginnings of agriculture in various places: plants were deliberately established for the food they could provide and some animals were domesticated. About 5,000 years ago settlements and communities, characterized by higher population densities than those of the hunter-gatherer bands, became established in southern Mesopotamia – now Iraq. These mark the beginnings of civilization and recorded history, and from this time cultural evolution proceeded much more rapidly. But all this cultural development has been going on for – at most – about one-tenth of the time that humans have been around.

Human behaviour patterns, as they evolved over thousands of years, were geared to life within small groups where there was no problem about recognition or personal identity. Observation of hunter-gatherer groups that survived into modern times shows that the place and role of each individual in these groups was recognised, and individual contributions to group survival and welfare were highly valued. The psychological well-being and the welfare of the individual were strongly associated with the welfare and well-being of the group, which may have been part of a larger grouping – a tribe. These recognition and behaviour patterns are embedded in our genes, in the sense that there would have been positive selection pressures acting in favour of those who conformed to them. This is why the punishment of exile was historically so serious: those who were banished from their community or tribe lost its protection and the support of their families; they belonged nowhere and were likely to be psychologically devastated. With the establishment of larger communities in towns and cities, where population numbers gradually increased from hundreds to thousands (and are now in the millions), community bonds became weaker and it became harder and harder for individuals to retain their personal importance and identity. Nevertheless, tribal bonds and associations remain strong and important.

There’s plenty of empirical – or at least anecdotal – evidence for this need to belong: we just have to look at the way people behave. Most of our social activities take place in quite small groups or organizations, within which there are sub-groups that provide their members with a recognised personal niche and to which members frequently show (sometimes strong) loyalty. The ‘units’ that we relate to obviously overlap: people feel strong rapport with ‘their’ team, their town or state – with any group that reinforces their sense of identity: this is who I am; this is why I matter. Examples include sports clubs, church groups, professional societies and groups within workplaces. But we also identify with our country: patriotism in many countries may not be the force it once was (it still is in the United States), but jingoistic enthusiasm for our national identity surges up when it comes to support for sports teams or individuals – the feeling of ‘togetherness’, of almost fanatical tribalism, generated by enthusiastic and excited groups can be extremely strong.

Individuals have different status and importance relative to the other members of each of the groups, communities, or sub-cultures to which they belong. Their status at work or in a professional society may be entirely different from their status in the local sports club. The net result defines their view of themselves in relation to their society. Young people without a focus in society, especially those who come from broken or dysfunctional families, are likely to join gangs, or indulge in behaviour that will be labelled antisocial by society as a whole, but which is driven by their need to be recognised, to have status within some group, to have an identity. Some of these groups might have bizarre initiation ceremonies and rites of passage. But, once in, you belong, and have the key to self-esteem.

Another factor that might influence our confidence and well-being is a sense of place. Those who grow up in the area where they were born are not only likely to be strongly embedded in their local community – assuming it’s reasonably stable – but also, usually, feel strongly connected to the land, to familiar countryside, to the characteristics of scenery and season. The strength of this propensity is very clear from the completely subjective way people tend to extol the beauty and advantages of their country or, quite frequently, the part of the country they live in. There is a visceral bond, weaker in most peripatetic westerners, but astonishingly strong in Aboriginal peoples; for Australian Aborigines ‘my country’ is of central importance – they feel the spirit of the country, of their ancestors. It defines their sense of identity; if they are taken from it the pull to return is clearly sometimes overwhelming, and when the tribal and community system in which they have their being breaks down, their lives become dysfunctional and directionless. We of Caucasian descent, immigrants long since displaced from our places of origin, nevertheless feel the atavistic attraction of those places and tend to associate ourselves with them, although we may know that we will never go back. The pull is weaker for us and, in most cases, not a dominant factor in our psyche and our view of ourselves; we have transferred our weaker sense of place to our adopted countries, but the strength of the attachment and commitment is likely to vary depending on how long we have lived there, and whether we have stayed in one locality.

This quest for identity shows itself strongly among the young (many of whom are not dysfunctional), who use social networking services such as Twitter and Facebook to ensure that they are ‘connected’ virtually all the time. Whether that electronic connectedness is an adequate substitute for genuine, face-to-face human contact and interaction is arguable, but it’s clear that many are driven by the feeling that it’s essential to be ‘on the air’ all the time. Some feel compelled to respond almost continuously to the flood of ‘tweets’ that clog their mobile (cell) phones, and if they haven’t updated their Facebook page for a few hours they become concerned that they are out of touch, that they might lose their (largely mythical) place in the virtual community in which they feel they have an identity. Connection is the goal. The quality of that connection, the quality of the information that passes through it, the quality of the relationships that connection permits – none of this is important. Social networking software clearly encourages people to make weak, superficial connections with each other, hardly likely to contribute positively to constructive social discourse. Most of those virtual friends are unlikely to care about the well-being of the individual, represented by a name and a photograph and some standardized, mostly trivial, social data. So we have to ask whether this kind of connectedness provides an adequate substitute for recognition and acknowledgement by real people in the real world. It seems unlikely.

The whole question of ambition and the urge of so many people to be seen, to be recognised, to have publicly acknowledged status, could usefully be analysed in terms of this human need for identity. Ambition, when you get right down to it, can generally be explained as the drive to be considered important, or interesting, to be identifiable by as many people as possible. Most of those who ‘bask in the public eye’ are concerned with their image and with this ‘recognition factor’. Of course there are people of high ability or achievement who don’t seek the limelight because they are quite comfortable with themselves and don’t require overt recognition and acknowledgement of who they are to maintain their self-esteem and sense of worth.

I think it’s reasonable to speculate that the gradual breakdown of families and family bonds in Western societies must be a major factor contributing to social problems in those societies. There have been social problems of various sorts ever since people began to congregate in large numbers in cities and towns, where individuals tend to disappear in the heaving anonymity of the mass, but the current narcissistic cult of individual satisfaction (‘Me first; what I want is the most important thing!’) and the lack of any universally accepted system of morality and responsibility for communities – arising from the lack of recognition within communities – are almost certainly major contributors to current problems. Strong, loving and supportive families, embedded in stable communities, minimise these problems, as does any supportive group, but it’s hard not to be pessimistic about the outlook.


Thursday, January 20, 2011

Can the young solve the world’s problems?


Storm clouds over Botswana - and the world?

There is a widespread tendency amongst the young to dismiss their parents’ generation as old-fashioned, boring and out of touch with the modern world. And it’s common to hear older generations lament the behaviour and attitudes of the young. It’s also common to hear the confident assertion that it has always been thus.

In fact intergenerational friction – at least at the levels common in most modern societies – is almost certainly a relatively new phenomenon. It’s caused by the large and rapid changes affecting most aspects of contemporary life: the young adapt to the changes more easily than their parents and the older generation as a whole, so they’re likely to be impatient of that generation and its conservatism, while the old decry changes and developments they do not always understand, nor see the need for. Many of them are incompetent users of computers and only use mobiles (cell phones) as telephones. They don’t understand or see the need for things such as iPods and the ‘apps’ that run on mobiles and tablets, and they frequently find the manners, dress, music and behaviour of the young unacceptable. But there are also less immediately obvious changes taking place, for which the older generation – largely the so-called baby-boomers, born between the mid-1940s and the mid-1960s – are responsible, but which many of them do not recognise or acknowledge. These involve resource and energy use and the impact of humans on the planet – matters that the young may be more aware of and concerned about than their parents. So there is plenty of scope for intergenerational friction.

This was not the case through most of human history. For long periods – let’s say for a couple of hundred thousand years – human societies were remarkably stable, changing only very slowly. Most people lived in small communities where the generations, like the seasons, succeeded each other without much change. There were blips in the cycles, caused by famine or plague or war, but these disasters were part of the accepted patterns of existence and were accepted as such. They did not bring about changes in the way people lived. For thousands of years there were few changes in the technologies used in agriculture, where most people worked and most tasks were done by hand or with the help of animals, or in methods of travel or communication. For most people, except the rich and the ruling classes, who travelled on horseback or in some sort of (usually horse-drawn) conveyance, the only way to travel was to walk, so few people left the areas where they lived. There was, generally, none of the interchange of ideas that may come from travel. Few people were educated; work began early in life and the work done by children was often important for the survival of their families and communities. There were no books for ordinary people; writing in any form developed only about 5,000 years ago, and the first printed books in Europe (beginning with the Gutenberg Bible) appeared only in the middle of the 15th century.

And so we could go on, and if we were inclined to get involved in detail, we might argue about differences between regions and races, or about the influence of armies and empires that may have absorbed the culture of the conquered, or transmitted to them the culture of the conquerors. Or we might argue about the relationships between generations that might have prevailed in advanced cultures with complex societies, such as the ancient Chinese, or the Maya and the Aztecs in Central America, or the ancient Egyptians, and in primitive cultures in Africa or parts of south-east Asia. But it is probably safe to say that, in most of these societies, rebellious behaviour by the young would have received short shrift: over most of human history cultural change was slow and, from generation to generation, young people grew up (if they survived) with the same beliefs, prejudices, superstitions and ideas about the way life should be lived as their parents and those around them. They followed, without question, the old-established patterns of living. In primitive societies life was an uncertain business and survival was the first priority. Few got the chance to grow old and there wasn’t much time or energy to spare for apparently pointless rebellion. The experience of those who did survive offered the best guide to living, and the young followed as best they could – or were sternly disciplined to do so. The rewards, in the form of status, prestige and possibly authority, came with age and experience.

But gradually, at least in Europe, the long darkness of pre-medieval times began to lift. For most people the social changes that came were imperceptible, but as populations increased and cities grew larger, the merchant classes, who traded with distant countries, grew in number. Communications improved as written material became more common among the merchant and ruling classes. The invention of the printing press in Europe in about 1450, and the advent of printed books, of which the Bible was the first, were epoch-making events. Before the printing press, ownership of a Bible, or any book, was rare. But as the presses proliferated, not only Bibles but other books became available to an increasing proportion of the population, and the information available to people increased rapidly. This in turn stimulated the development of literacy. By the end of the 17th century novels and story books, technical literature and political pamphlets were becoming commonplace. The number of people who could read, although it remained small, increased gradually. Newspapers and magazines appeared in the 18th century, and by the 19th century literacy was sufficiently widespread to create a market for a cheap press, which in turn led to the development of advertising.

Increasing literacy and the flow of information that came with newspapers led to increasing awareness of the world outside the restricted confines of small societies. This, coupled with the massive changes brought about by the industrial revolution of the 19th century and by the world wars in the first half of the 20th, produced immense and rapid change, culminating in the near-complete disintegration of European society at the end of World War II. The United States, untouched directly by the war – the country was not invaded or bombed and there was no military action on the American mainland – experienced an industrial boom triggered by the massive production of aircraft, tanks, ships, weapons and all the material needed by the military. After the war the energy and resources that had been focused on all this, and the return to civilian life of more than a million ex-servicemen, led to an unprecedented surge in the production of consumer goods.

The Marshall Plan, by which the United States provided enormous amounts of aid to Europe, underpinned the astonishing economic and social post-war recovery there. Recovery in the Soviet Union was slower, and was distorted by the Communist emphasis on heavy industry and impractical production targets. Communism also imposed a grey uniformity on populations; dissent was ruthlessly quashed and personal freedom severely restricted. But in the United States and Western Europe (particularly), burgeoning technical innovation and the production of consumer goods went hand-in-hand with rapidly changing social attitudes. Ideas about social duty and obligation to society came to be replaced by the cult of the supremacy of the individual; the doctrine that personal freedom and the satisfaction of the individual’s every whim and want was the highest social priority came to be accepted as inarguable.

The generation that grew to maturity in Europe in the 1960s and ‘70s wanted nothing to do with the world their parents had known, a world of war followed by years of shortages. They remained in school for longer than ever before and ever-increasing numbers were university-educated, which, in itself, created a gulf between them and their parents. Theirs was a world of material satisfactions, of fashion and music and television and, increasingly, of self-gratification. Many became impatient with their parents and their attitudes and ideas, impatient of their conservatism. The young wanted something different; utopian ideas about changing society spread, leading to dissent from conventional expectations and attitudes. Young people in the post-war United States were endlessly indulged, endlessly told they could achieve anything they aspired to in the land of unlimited opportunity. The materialist American way of life was assumed to be the ultimate good life, but it produced a backlash: the feeling that there must be more to life than this led to contempt for establishment attitudes, dress and manners, and exploded in the protests against the Vietnam war, in the counterculture movement, ideas about free love, student protests and the appearance of hippies.

Thirty years later much of the turmoil has subsided: economic pressures have replaced youthful idealism – owning a car and a house and all the electronic gizmos and gadgets that characterise modern life in the developed countries has become the primary objective for most people. The idealists and hippies have succumbed to middle-aged conservatism and the consumer life-style. Materialism rules. And that goes for their children, who want all the toys – and they want them now! – as well as untrammelled personal freedom. The ‘me first, and I want more’ attitude is almost universal. Education is not about ideas; it’s about acquiring the skills needed to accumulate wealth. So the objectives and priorities of the generation currently in their late teens and early twenties have converged with those of their baby-boomer parents. But that doesn’t seem to have brought the generations closer psychologically and emotionally. The young are still impatient of their parents, but for different reasons. They still want to ‘do their own thing’ and, because they are given remarkable freedom, any attempt by parents to impose control is resented. And the parents are likely to be regarded as old-fashioned because they decry modern music, may not be computer literate, and may not appreciate or be interested in Facebook and Twitter or spend a large part of their time ‘connecting’ through their cell phones. So the gap between the generations is maintained – or, indeed, may grow – despite the fact that, when life gets difficult, many of the young turn back to their parents for support. (The situation is different in so-called underdeveloped countries, where the young aspire to the modern consumer lifestyle and material comforts undreamt of by their parents.)

All that is rather a long-winded way of making the point that (in my view) the generation gap is a modern phenomenon. Does it matter? Well, yes, it does; in fact, in some respects, I think there must be a generation gap. The world is changing faster than at any time in history and the changes are causing enormous problems. The flexibility of youth allows the young to cope with and adapt to change better than older people, and there are encouraging signs that many of them recognise, and are adapting to, the emerging reality that the affluent life-style in developed countries – and the life-style of the affluent in underdeveloped countries – cannot continue into the indefinite future. Things cannot go on as they are now. There will be shortages of energy and other resources – including food, although that is unlikely to affect affluent societies for some time yet. So we need young people who will reject the complacency – or shortsightedness – of their parents and acknowledge all this, since problems can’t be solved unless they are recognised and acknowledged. Even my cursory scan through history is enough to show that human societies are not stable; in fact, if we think about the explosive growth of human populations, we don’t need to know much history to know that things are changing – and the changes are coming fast. If the consequences are not to be catastrophic – or at least extremely unpleasant for multitudes of people, and disastrous for the world’s ecology – the young people who will have to live with it all will have to solve the problems we have bequeathed to them.


Monday, December 6, 2010

Electronic social networking

Australian magpies - they converse a lot, perhaps more intelligently than much that is posted on Twitter
Young people, particularly but not exclusively, are heavy users of various electronic social networking systems, of which Twitter and Facebook are currently the most widely used. Most of them will also spend a great deal of time in front of computers, surfing the net. Is all this good? There will be many who argue that it’s an intrinsic part of modern youth culture, so what’s the problem? Well, the technologies are not, in themselves, harmful, and they can be useful, but in exacerbating the developing tendency of adolescents to require constant stimulus and distraction from anything needing focused effort, they can be said to be dangerous to society. Let’s consider the arguments.

Twitter[1] is a website through which users can send messages – called tweets – and read those of other users. Tweets are text-based posts of up to 140 characters displayed on the user’s profile page. They are publicly visible by default; senders can restrict message delivery to lists of nominated friends, but reports of the way Twitter is used suggest that the idea of being at the centre of some hypothetical community of friends attracts attention-seekers, as well as those who have difficulty – perhaps because of their mobile phones and computers – interacting with real people. So they make their tweets available to the world.

Facebook[1] is a website on which users may create a personal profile, add other users as friends, and exchange messages, including automatic notifications when they update their profile. They may also join common-interest user groups, organized by workplace, school, college, or other characteristics. Facebook was produced in 2004 at Harvard by a nerdy student called Mark Zuckerberg, dubbed a ‘social autistic’ by Zadie Smith[2], and has spread like a flame through the combustible world of the would-be-connected young: there are now an estimated 500 million users. The idea of ‘friends’ is central to Facebook; in pursuit of connectivity users post information about themselves that may include the minutiae of their daily lives and blow-by-blow descriptions of their activities, as well as intimate details of their lives.

The question is, I think, what is the point of Facebook and Twitter? Do they serve any useful purpose?

Surprisingly, Twitter, with its 140-character restriction, comes out ahead in relation to that question. At the trivial level, a friend who had suddenly decided to dine out in New York solved the question of which restaurant provided good food (and good value?) by sending out a tweet asking about restaurants in the area. Several replies (from unknown people), with recommendations, were received within five minutes. Not an earth-shaking result, but useful indeed, and indicative of a general class of queries likely to elicit responses from the ‘Twittersphere’. But it’s a rather startling thought that – even in New York – there were enough people checking Twitter at the time for some of them to know about restaurants in that particular area and be prepared to reply. At an equivalent level, tweets and text messages are sent in to television – and I suppose radio – stations where there is discussion of some subject or issue in which people are interested. So the senders are immediately involved (or at least they feel as if they are) and get the chance to contribute – although hardly in depth.

More usefully, Twitter is credited with providing the vehicle by which young people in Iran were able to tell the world what was happening to them as Iranian police and Muslim fundamentalist thugs broke up their demonstrations in favour of ‘non-establishment’ candidates after the 2009 elections in Iran. In effect the twitterers provided a news outlet and focussed the world’s attention on events that were not covered by conventional western media, and might otherwise have been ignored. Some believe there is a Twitter revolution under way in Iran, and that the technology is having a significant effect on the rulers of the country, who have a considerable problem limiting the flow of information. However, we must recognise that there is no way of establishing whether a tweet is true or false, and no way of confirming the location of the sender, so there must be debate about how much of the information coming out of Iran is genuine, and the extent to which it reflects the views of the population[3]. Nevertheless, since I subscribe to the idea that a free press is an essential component of a free society, I must accept that Twitter has the potential to make a contribution that outweighs the triviality of most of its uses. (In this respect I noticed, in the weekend papers, that a Canberra academic is the target of a lawsuit because of an (allegedly libellous) remark she posted on Twitter. Even if the lawsuit fails, one must question the judgement of a – presumably intelligent – person who does something like that.)

In the case of Facebook, the main argument that might justify its existence is that it helps to satisfy the basic human need to belong, to have an identity as a member of a group within which we are recognised and valued. This provides the basis for self-esteem. Families are the most immediate and important groups in this respect: in traditional societies the local community provides the milieu within which families and individuals are embedded and have their recognised and acknowledged places, but the gradual breakdown of families and family bonds in Western societies means that many young people are denied that support and recognition. So they look for something else among their peers and associates, and Facebook seems to offer a community of friends, a community within which they can say: ‘This is who I am; this is why I matter’. It also provides instant feedback from someone, somewhere, 24 hours a day, and this constant reassurance comes without the stress of real-life, face-to-face conversation. Jaron Lanier[2] argues that on Facebook people ‘reduce themselves’ in order to make a computer’s description of them appear more accurate. But there is no perfect computer analogue for what we call a ‘person’: the attempt to turn life into a database is a degradation, based on a philosophical mistake. Computers cannot represent human thought or human relationships, so recognition by otherwise unknown ‘friends’ on Facebook is based on incomplete information, besides involving no significant personal commitment.

There are major disadvantages to both Twitter and Facebook. Much (probably most) of what is posted on Twitter is trivial in the extreme. It has been labelled ‘pointless babble, better characterized as social grooming’[1] and, more serious than the fact of its triviality, there are strong indications that, among the young particularly, Twitter and Facebook engender the feeling that it is essential to be in contact all the time, that you must respond to the flood of tweets, and that if you haven’t updated your Facebook page for a few hours you are out of touch. Connection is the goal. The quality of that connection, the quality of the information that passes through it, the quality of the relationships that connection permits – none of this is important. Social networking software clearly encourages people to make weak, superficial connections with each other, hardly likely to contribute positively to constructive social discourse. And there are signs of an inability among the young who are addicted to these technologies to develop empathy, which cannot develop through social networking because we are not aware of how other people are really feeling – we cannot pick up on body language when we are communicating through a screen[4].

A consequence of the addictive urge to be ‘connected’ was highlighted by an article in the New York Times about the impact of these social networking technologies on schoolchildren in California. Students are constantly distracted or inattentive: respondents to the article overwhelmingly decried the effects of the technologies on school work. A contributor to the NY Times discussion said that many of the descriptors of technology-infused school kids match those of criminal personalities: inability to maintain concentration on necessary tasks, need for constant and instant gratification (being bored is an offense!), and, worst, failure to appreciate the necessity of personal investment of time, attention and effort in order to accomplish anything worthwhile. It is clear that, uncontrolled – as they mostly are, at least by American parents – these technologies pose a profound new challenge to focus and learning: apparently students can’t think independently or originally, and information, if it isn’t coming to them from a screen, is ignored as having no value.

Baroness Susan Greenfield[4], an eminent British neuroscientist and scholar, has expressed serious concerns about the need felt by the wired, cell-phone generation for instant gratification: the whole concept of chat and texting bypasses the most fundamental human communication – actual conversation. And because, in technologies like Facebook and Twitter, ‘pithy allusion substitutes for exposition’[5], the art of conversation, of the interesting development of an argument or an idea, is clearly endangered in this socially uncoupled generation. In the profession I followed, clarity and accuracy of expression are essential in the business of conveying sometimes complex ideas so they can be understood, tested, refined and either discarded or promulgated. This is hardly likely to be a skill developed, or appreciated, by those for whom sloppy, inaccurate expression and abbreviated, unpunctuated telegraphese substitute for disciplined written communication. Shoddy prose bespeaks intellectual insecurity: we speak and write badly because we don’t feel confident of what we think and are reluctant to assert it unambiguously (‘It’s only my opinion…’).

Social networking sites apparently tap into the basic brain systems for delivering pleasurable experience. In the case of Twitter, where users post an almost moment-by-moment, stream-of-consciousness account of their thoughts and activities, however banal, the addictive nature of the activity is much like that of traditional sources of instant gratification – sex, drugs and drink. The compulsion to know what other people are doing and thinking and feeling is, arguably, a form of voyeurism, although for many people the idea of describing your thoughts and actions in minute detail is absurd. Why would you subject your friends to that, and how much trivia can you absorb?
The growth of ambient intimacy can seem like modern narcissism taken to a new, supermetabolic extreme—the ultimate expression of a generation of celebrity-addled youths who believe their every utterance is fascinating and ought to be shared with the world[6].

There are other problems, or potential problems. One of these (depending on your point of view – you may think it’s acceptable) is targeted advertising. Because the personal information on Facebook sites is publicly available, it can be used by companies to target their advertising very precisely to groups or regions – another step in the commercialisation of our world and encroachment on our privacy. Where once the Internet seemed an opportunity for unrestricted communication, the increasingly commercial bias of the medium – ‘I am what I buy’ – brings impoverishment of its own.

Clearly these technologies are here to stay; they have their uses and those will no doubt expand. They have hundreds of millions of users – although mass adoption does not, in itself, constitute a logical or intellectual argument in their favour. For those who have reached ‘responsible adulthood’ (which research indicates arrives at about the age of twenty-two in males; younger in females) Facebook, Twitter and their ilk may well be used sensibly for profit (in the widest sense) and pleasure. Used sensibly they undoubtedly provide agreeable methods of communicating with real friends, and Twitter may be a useful source of information. The dangers they pose in relation to the inability of kids to concentrate, endless time-wasting, low social skills and possibly poor ability to develop strong personal relationships, are part of the spectrum of problems generated by computers and mobile phones in general. The dangers to children can be overcome (although in most cases they won’t be) by parental control and discussion: talking to each other, encouraging the reading of real books with substantial content, discussing the implications of the technologies. In relation to time-wasting, if you are among those who think that addiction is not a particularly harmful condition, and that constantly updating your activities for the benefit of Facebook ‘friends’, or sending bursts of inconsequential information into the ether to join the mass of equally banal, inconsequential and short-lived information swirling through it, is time well spent, then there’s nothing to argue about: that’s a value judgment, not an objectively arguable fact. I would just have to say that those are not values I subscribe to.

At a more basic level these electronic social technologies cause us to examine our assumptions about the purpose of life and how we should – in some fundamental sense – spend our time. Is time spent exchanging useless information via Facebook and Twitter wasted any more than time spent watching rubbish on television? Probably not. But the question pushes us back to the idea of a fulfilled life, of reaching our physical and mental potential. Both need work and self-discipline. In the long run, time spent doing something worthwhile will be far more rewarding than time frittered away in exchanging trivia. Strong human relationships will be far more valuable than the accumulation of large numbers of ‘friends’ on Facebook, most of whom mean nothing and do not meet the criteria of genuine friendship.



Sources consulted


[1] Wikipedia.
[2] Zadie Smith: ‘Generation Why?’, NY Review of Books, Nov. 25, 2010. Jaron Lanier is cited by Smith.
[3] Annika Wang: ‘Twitter and the Iranian elections’, CIMA, Nov. 2009; Daily Mail, Nov. 2010.
[4] Report of a speech by Baroness Greenfield: ‘How Facebook addiction is damaging your child’s brain’.
[5] Tony Judt: ‘Words’, NY Review of Books, July 2010.
[6] Clive Thompson: NY Times Magazine, Sept. 7, 2008.
 

Monday, November 29, 2010

On friends and friendship


Born, brought up and educated in Africa, we lived in Zimbabwe (then Rhodesia) and South Africa. As I pursued my career in science, we moved to Scotland, England and later Australia, taking our children with us. The friends we accumulated along the way tended to reflect the stage of our lives at the time: when we were young parents with small kids, most of our friends were people from the same group – small children and their requirements take up a great deal of time and attention, so it’s easier to make friends with people who have the same preoccupations and who find children and their doings subjects of engrossing interest. As our children grew, interactions with other parents decreased, or at least changed, as the children themselves influenced the friends we made and it became necessary to meet the parents of their friends, and the parents of those with whom they shared activities, on a different basis. And, of course, there were always friends and acquaintances from entirely separate groups, such as people we met through work or sport.

Friends and colleagues: Paul, Joe, Dick, Sune. In Estonia, at a working meeting, 40 years since we all first met at a conference - and it looks like it! Work and social groups have overlapped over the years.
We use the word ‘friend’ quite loosely in relation to people ranging from acquaintances, who don’t really mean much to us and to whom we are not particularly important, to those with whom the ties of shared interest and mutual concern are strong and durable. The difficulty, quite often, is to know the difference. Real friends, the ones that matter, are willing to do things for you that might involve inconvenience for themselves; they’re seriously concerned about your welfare, well-being and happiness. They’re there for the long haul, so even if you don’t see each other for years, when you do get together it takes very little time to re-establish the relationship, to pick up where you left off, fill in the gaps and enjoy each other’s company.

So there are gradations in all this; most of us have a range of different kinds of friends, from the solid, intense, important relationships to the unimportant. There are people whose company we really enjoy; others with whom we’re happy to spend time when it’s mutually convenient, and there are people we meet who are irrelevant to us except insofar as they are fellow-members of the human race and deserving of consideration for that reason. The more casual friends may be fun to be with, when all is well on both sides of the relationship, but we are likely to find that, in many cases, we’re not important enough to them to make them ready and willing to put themselves out for us – and that may include not being willing to provide a sympathetic ear when we want a confidant. It’s a two-way business, of course: there may be times when we’re not interested in a friendship developing beyond the casual, but there will also be times when we put quite a lot of work into a relationship to find that the interest isn’t really reciprocated. It hurts, but in those cases we may as well cut our losses and walk away.

Regardless of whether we have led a peripatetic or a stable and locally-rooted life, the patterns of friends and friendship groups change with age. Those who live their lives in one place, who grow up and grow old in the community they were born into, are likely, I suspect, to have fewer friends than those who, like me and my family, have moved around. They will have a much more settled sense of place, of belonging to a community that defines their place in the world. No doubt old friendships solidify into comfortable patterns of long-settled mutual regard, with well-recognised problem areas that can be avoided, as well as highly-valued areas of shared experience. And, it seems reasonable to assume, the overlap between different groups of friends and acquaintances is much greater when most of the people involved have deep roots and multiple connections in the community. Nevertheless the friends of youth grow older and change, interests diverge, relationships with others affect friendships, people move away, or die. Society changes and everyone is affected.

And there is another point: not only do we change with time but, because we tend to project ourselves in quite different ways in different environments, the person we seem to be within one group may be slightly different from that in another. So the way we’re seen by the people in our local sports club may be entirely different from the way we’re seen by our workmates or by our family and neighbourhood friends and acquaintances.

For the transients, who move in and out of communities, the problem is to develop friendships that are meaningful, relationships that matter, in each of the communities in which they find themselves. And, having developed such friendships, they may then – as we did so often – have to leave them behind. Furthermore, even if the ‘residence time’ in various places is measured in years, the connections and overlaps between the groups associated with work and sport and family and neighbourhood communities, if they exist at all, are likely to be much weaker than they would be for people who stay in one place for most of their lives.

The whole process of making friends gets harder as you get older: the children leave home; you retire and move away from, or lose contact with, your community of workmates; your recreational activities become more restricted – perhaps you no longer play tennis or golf in the pennant competitions, or you drop sport entirely in favour of the bridge group, or whatever. People you meet are more set in their ways; they have their friendship circles and may not be much interested in expanding them; they may make polite enquiry about your family but are not much interested in hearing about them and their doings. So the real friends scattered down the years and across the world hold their value, even if their direct impact on our day-to-day lives is now minuscule or non-existent. They justify the hassle of travel.

Monday, November 15, 2010

When should we die?

Sunset from 9-mile beach, Western Australia. Seems appropriate to the subject!

This doesn't sound like a very cheerful subject, and I guess it isn't, really. But it's easier to get through life if we face the problems that it throws at us, and look for practical solutions. This is not an attitude that our (western) societies are good at; we'd much rather - generally - duck tough issues and hope they'll go away. Or we take refuge in platitudes and spout half-understood aphorisms that allow us to pretend we have a solution. So I thought it might be interesting to look at the problem of how we, and our society, approach the last days of our lives.
The stimulus for this particular polemic was a conversation with a friend of mine, whose mother-in-law is dying. She's an old lady, with dementia, can't look after herself and has largely lost control of her bodily functions. She now has an ugly infection in one of her eyes. It can't be treated, and the doctors are concerned, apparently, that the infection will spread down an optic nerve into her brain and kill her. The only possible treatment would be to remove the eye.  But why would you do that? What good could it do? The doctors are hamstrung by legal requirements to preserve life, and it seems that's what they have to 'officially' recommend.  No-one wants to make the obvious decision, which must be to do whatever is necessary to ensure she is as comfortable and free of pain as possible, which is likely to involve, as I understand it, large doses of morphine or some similar drug, but take no positive, 'heroic' and expensive action. If the morphine doses are sufficient to 'snuff out the flickering flame of life', well, so be it. The result will be better for all concerned, not least for the old lady lying there in a painful, confused and hopeless little heap.
This discussion is, of course, a 'sub-set' of the arguments about euthanasia which, being an active process, has all sorts of additional complications. We won't go there - at least not this time. My argument is simply that, for all of us, there is a time to die, and nothing is gained by postponing it for a while at the cost of pain, indignity, inconvenience, unpleasant work for all those who have to look after the dying, and the expenditure of ridiculous quantities of material resources - as well as occupation of hospital beds that could be better used by others. There are numerous tales similar to the one I have told here, about people dying of cancer, who go through round after round of unpleasant, inconvenient treatment, often with unpleasant side effects, to (possibly) prolong life for a few months. Why do they do it?

The answer, I suggest, is because that's what's expected. We rush for treatment, and once we're in the hands of the hospitals we tend to lose control of the process.

Underlying all this is the idea that human life is somehow invaluable. This is quite frequently asserted as if it was completely inarguable. It derives, I believe, from the underlying fear we all have of dying, so we argue that every life - which of course includes ours - must be preserved as long as possible, whatever the cost. This has been developed into doctrine by Christianity and permeates western societies. Well, I think it's a stupid assertion/doctrine/position, which can and should be qualified. When the time comes to die, it would be good to accept the fact and do it well.


Autumn at Withycombe - colour at the end of the line 

P.S. Two posts today do not indicate a rush of creativity (if that's the right word); just that I had them done and got around to posting them - it's been raining all day.



African animals

Giraffes in Botswana - May 2009


Years ago, before Diana and I were married, we went off in my little Vauxhall – you have to be quite old to remember them – to spend a few days in Wankie game reserve. (That was before Rhodesia became Zimbabwe, when Wankie became Hwange, but that’s beside the point, as is the fact that my mother was horrified: we WEREN’T married!)

Not long after we entered the reserve we looked up to see, regarding us quizzically over the top of some quite large acacias, the heads of two giraffes, quite close to us. Nothing extraordinary, but the picture is indelibly printed in my memory. Big eyes, long noses, jaws rhythmically chewing the cud. Graceful necks and astonishing legs. How did all that evolve? They were just the first of the wonderful animals we saw, some of them very close to the car – like the lioness lounging a few metres from the roadside, not interested in us or the large herd of buffalo on the other side of water held back by an earth dam. We watched elephant drinking and splashing about in another dam, in the evening, so close that I rather nervously turned the car so we could move away if they came TOO close. In between times there were all the usual beautiful antelope – impala, kudu, spectacular sable and the little ones, duiker and steenbok. Warthogs, running with their ridiculous tails in the air, were amusing then, and remained so over other visits to African wildlife parks, over many years.

You can still go to Africa and see the charismatic megafauna, the birds and antelope, hippos and crocodiles in the rivers but, as human populations increase rapidly, wildlife numbers are crashing across the continent. The reasons are well known: poaching, habitat destruction, direct competition between humans and animals – people who depend for their survival on crops and domestic animals don’t take kindly to either being eaten by wild animals – and all sorts of ecological imbalances. And it’s not just wildlife that is suffering from the impact of rampant human reproduction; across vast areas of rural Africa ecosystems are being irreversibly damaged – trees are cut, overgrazing destroys vegetation, soil erosion eats away at the topsoil. There are all sorts of well-meaning, and undoubtedly valuable, programs and groups concerned with halting the degradation and loss of wildlife, but we seldom see any attempt to come to grips with the fundamental, underlying problem: too many people.

Shock; horror! Politically incorrect to an alarming degree! What am I saying? That there should be mass culling of humans? Well, of course not, but it does seem that any discussion of African populations in international forums is circumscribed by furious accusations of racism from African politicians. This accusation is a throwback to the 1960s, when many African countries were struggling to get rid of colonial rule by various Europeans, but it’s irrelevant now. You can’t solve problems unless you face up to them, and pretending that the human population explosion across Africa isn’t a problem is sheer stupidity. If Africans are to enjoy reasonable standards of living they have to stop having so many babies. They can’t expect to achieve the profligate and unsustainable standards we in ‘the west’ indulge in, but they can certainly aspire to better than most of them now have.

The usual answer to this question of population control is that there must be economic development, and a focus on the education of women. Then the women will want, and be able, to control their own fertility. But that’s a whole different discussion. I guess the point I want to make here is a more philosophical one: why do we humans think our priorities and requirements for living take precedence over every other biological organism? As part of this attitude we assume it’s unquestionable that the earth’s resources should be exploited to meet our needs, and also frequently make assertions that we should not set limits on the resources that may need to be expended to save a single human life. That’s absurd. But what are the limits, and what determines them? This seems to be a philosophical black hole into which most attempts at rational discussion of the fundamental human dilemma caused by our success as a species disappear.

It’s hard to see solutions. And African animals are only one symptom of the problems. There will be wonderful animals wandering around Africa, doing their thing, for years yet, but I’m not sure my grandchildren could go to that continent and find more than traces of the superbly complex and rich environments that were there not so very long ago. Sad, sad business. The animals have as much right to their time on this planet as we do, but their date with extinction is being brought forward rapidly.

Go well

Joe

Thursday, November 4, 2010

Paranoia

Castle on the coast of Estonia. Does this symbolise what we're afraid of?




In the last few days there have been the usual breathless news headlines associated with terrorist threats: small bombs originating from Yemen, addressed to synagogues in Chicago, were found (following a tip-off) in cargo planes. One of the planes landed in Britain and the bomb was located there. This allowed the British Prime Minister, David Cameron, to express the view that it was possible the bomb was intended to explode while the plane was over Britain. I wonder if he had any evidence whatsoever for that suggestion? I doubt it. Like every politician, and most of the media, Cameron seemed to find it necessary to feed the paranoia about terrorism that appears to be universal in western countries. Of course there might have been some benefits to Cameron in playing the very common game of the politics of fear; it would provide him with an opportunity to strut his stuff as the defender of public safety. (‘Look how concerned your government is…!’)

Yes, there are terrorist threats, emanating primarily from Muslim countries. And yes, we have to combat that terrorism and, as far as possible, take whatever actions are necessary to ensure that planned attacks are not successful. But do we have to hit the panic button to the extent we do whenever an attempted bombing is foiled – or even when one succeeds? The panic and system gridlock in the United States after 9/11 were far from admirable. If I were a terrorist and wanted to disrupt the economies and patterns of life in western countries I would, every now and then, ‘leak’ hoax warnings to the western media that attacks were imminent, giving vague but convincing information about their type and probable targets. There would be a good chance that these would result in a flurry of excited reports in the media, and possibly the shut-down of airports and all sorts of expensive searches and precautions. (I assume that this is, in fact, fairly common. We frequently hear of plane delays and the like because of warnings about non-existent bombs.)

The point is that our responses to such threats, whether real or mischievous, are out of proportion to their implications. But, but, but… I can hear the outrage! People could be killed! Yes, indeed they could, and probably will be. The chances are that there will be more successful attacks such as those on the Twin Towers in New York (9/11), on restaurants in Bali in 2002, on commuter trains in Madrid in 2003 and on the Underground in London in 2005. But does paranoia help solve the problem? Clearly not. And should the prospect – or the reality – bring our societies to a grinding halt? Equally clearly, not. Western security forces have to keep working in the background to foil these things, as they frequently do, and we do need security at airports (although whether that should run to full body scans and searches is arguable), but in most cases we don’t need to shut up shop and cause enormous inconvenience and expense. Life must go on.

Let's get this in perspective. The most successful of modern terrorist attacks (9/11) killed about 3,000 people. But every year Americans kill about 10,000 of their fellow-citizens with handguns, and wound another 50,000 – not to mention about 20,000 accidental woundings and 15,000 gunshot suicides. Yet the vociferous and successful gun lobby manages to persuade Congress (and the Supreme Court) that owning a gun is an inviolate right under the (Second Amendment of the) Constitution. We might also look at things like road death statistics in most western countries, and deaths from avoidable self-abuse like smoking. Where’s the logic in it all? Why don’t the Americans wage a war against their own bizarre (lack of) gun laws instead of against Iraq, where they killed a few hundred thousand people and destroyed the government of a country (albeit a rotten dictatorship) in the course of President George W Bush’s ‘war on terror’? It’s also hard to argue that the war in Afghanistan, intended to control or reduce terrorism, is serving that purpose. And why isn’t cigarette smoking banned? (The answer is obvious.)
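To make the scale of that comparison concrete, here is a minimal back-of-envelope sketch (in Python) using only the round figures quoted above. The numbers are this essay’s own approximations, not official statistics, and the ten-year window is an arbitrary choice for illustration.

```python
# Back-of-envelope comparison using the round figures quoted above.
# All inputs are the essay's own approximations, not official statistics.

TERRORIST_TOLL_9_11 = 3_000          # deaths in the single worst attack
HANDGUN_HOMICIDES_PER_YEAR = 10_000  # "Americans kill about 10,000 ... with handguns"
GUNSHOT_SUICIDES_PER_YEAR = 15_000   # "15,000 gunshot suicides"

YEARS = 10  # arbitrary window, chosen for illustration

gun_deaths = YEARS * (HANDGUN_HOMICIDES_PER_YEAR + GUNSHOT_SUICIDES_PER_YEAR)
ratio = gun_deaths / TERRORIST_TOLL_9_11

print(f"US gun deaths over {YEARS} years: {gun_deaths:,}")   # 250,000
print(f"That is roughly {ratio:.0f} times the 9/11 toll.")   # ~83 times
```

On these figures, one decade of ordinary American gun deaths dwarfs the worst terrorist attack by nearly two orders of magnitude, which is the point of the paragraph above.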

I’d like to make two points: one concerns the question of probable risk; the other – peripheral to my main argument here, but of some interest – concerns how terrorism might best be fought.

I don’t have data quantifying probable risk, but there’s no question that, for the average person, the chance of being killed or injured in a motor accident is hundreds of times higher than the risk of being killed or injured by a terrorist bomb. We accept that, and many other risks, and live with them because we value our cars and are prepared to take our chances and pay the price. I wonder what the economics of road safety campaigns are: how much is spent per life lost on the roads, relative to the economics of public paranoia about terrorism – i.e. how much is spent on security, how much time is lost and inconvenience caused, per life lost to terrorist bombs?
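The comparison that question asks for reduces to a simple ratio: total spending divided by lives lost over the same period. A minimal sketch follows; every input figure is a deliberately hypothetical placeholder (I have no real data here, as noted above), so it shows only how lopsided the ratio can look once any plausible numbers are plugged in.

```python
# Cost per life lost = total spending / deaths over the same period.
# Every input below is a HYPOTHETICAL placeholder for illustration only;
# substitute real figures to do the comparison properly.

def cost_per_life(total_spend: float, lives_lost: int) -> float:
    """Spending divided by deaths over the same period."""
    return total_spend / lives_lost

# Hypothetical annual figures for some country:
road_safety_spend = 500e6   # $500 million on road safety campaigns
road_deaths = 1_500

security_spend = 5e9        # $5 billion on anti-terrorism security
terrorism_deaths = 5

print(f"Roads:     ${cost_per_life(road_safety_spend, road_deaths):,.0f} per life lost")
print(f"Terrorism: ${cost_per_life(security_spend, terrorism_deaths):,.0f} per life lost")
# With these made-up inputs: ~$333,333 vs ~$1,000,000,000 per life lost.
```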

Our paranoia is not confined to the risks from terrorism: we are obsessed with safety and risk avoidance in every aspect of our lives. There are constant demands for precautions against all manner of real and imagined hazards, ranging from absurd regulations against asbestos in buildings (even if it’s covered in paint and tucked away somewhere) to safety-at-work provisions that range from the sensible to the ridiculous, and the endless strictures on the packaging of almost everything we buy. Considerable imagination is sometimes required to think of how an item could be dangerous, but you can be sure the manufacturers will warn against every real and imagined hazard in their eagerness to cover their asses against legal action by idiots who are convinced that life should be free of all risk, but who have managed to hurt themselves and want someone to pay for it. (Those same idiots will die, in due course, like everyone else.) The problem is finding the right balance between sensible precautions and acceptable risk, but there’s no indication that our societies are likely to find that balance – if we got anywhere near it, there would be a good chance that special interest groups would protest vociferously that their particular obsessions deserved exceptional treatment. Balance doesn’t look like a sensible option through the blinkers of uncritical bias!

And so I could go on, but let’s get back to terrorism. If you read books or articles by people who understand the problem – its causes and scope and the best ways of combating it – the general opinion is that the most effective counters are not high-tech surveillance (although that has a place) but the recruitment and training of people familiar with the language and customs of groups who may be considered threats. These people are introduced into the societies we are concerned about, to ‘keep their ears to the ground’. Like terrorist ‘sleeper’ cells, our agents may be in place for years without taking action. But we need lots of them. The point made at the beginning – that the most recent bomb threats were ‘defused’ (!) as a result of a tip-off – supports this argument. We should also be studying the societies of concern, learning to understand their grievances and aspirations, helping to solve their problems. Agents and diplomacy and well-targeted aid (if that’s not an oxymoron) are cheaper and more effective than bombs and cruise missiles and Predator drones, which frequently make the problem worse; they can legitimately be regarded as terrorism by their victims – often innocent, like western victims of terrorist attacks – and by the relatives and friends of those victims.