In "We are hopelessly hooked" (New York Review of Books, February 25), political historian Jacob Weisberg canvasses the social impact of digital technology. He describes mobile and social media as "self-depleting and antisocial", but I would prefer "different-social" – not merely for the vernacular, but because the new media's sadder side is a lot like what's gone before.
In reviewing four recent contributions to the field – including works by Sherry Turkle, Joseph Reagle and Nir Eyal – Weisberg dwells in various ways on the twee dichotomy of experience online and off. For many of us, the distinction between digital and "IRL" (the sardonic abbreviation of "in real life") is becoming entirely arbitrary, which I like to illustrate with an anecdote.
I was a mid-career technology researcher and management consultant when I joined Twitter in 2009. It quickly supplanted all my traditional newsfeeds and bulletin boards by connecting me to individuals I came to trust to pass on what really mattered. More slowly, I curated my circles, built up a following, and came to enjoy the recognition that would ordinarily come only from regular contact – if such travel were affordable from far-flung Australia. By 2013 I had made it as a member of the "identerati" – a loose international community of digital identity specialists. Thus, on my first trip to the US in many years, I scored a cherished invitation to a private pre-conference party with 50 or so of these leaders.
On the night, as I made my way through unfamiliar San Francisco streets, I had butterflies. I had met just one of my virtual colleagues face-to-face. How would I be received “IRL”? The answer turned out to be: effortlessly. Not one person asked the obvious question – Steve, tell us about yourself! – for everyone knew me already. And this surprising ease wasn’t just about skipping formalities; I found we had genuine intimacy from years of sharing and caring, all on Twitter.
Weisberg quotes Joseph Reagle in "Reading the Comments..." looking for “intimate serendipity” in successful online communities. It seems both authors are overlooking how serendipity catalyses all human relationships. It’s always something random that turns acquaintances into friends. And happy accidents may be more frequent online, not in spite of all the noise but because of it. We all live for chance happenings, and the much-derided Fear Of Missing Out is not specific to kids nor the Internet. Down the generations, FOMO has always kept teenagers up past their bedtime; but it’s also why we grown-ups outstay our welcome at dinner parties and hang out at dreary corporate banquets.
Weisberg considers Twitter’s decay into anarchy and despair to be inevitable, and he may be right, but is it simply for want of supervision? We know sudden social decay all too well; just think of the terribly real-life “Lord of the Flies”.
Sound moral bearings are set by good parents, good teachers, and – if we’re lucky – good peers. At this point in history, parents and teachers are famously less adept than their charges in the new social medium, but this will change. Digital decency will be better impressed on kids when all their important role models are online.
It takes a village to raise a child. The main problem today is that virtual villages are still at version 1.0.
A letter to the editor of The Saturday Paper, published Nov 15, 2014.
In his otherwise fresh and sympathetic “Web of abuse” (November 8-14), Martin McKenzie-Murray unfortunately concludes by focusing on the ability of victims of digital hate to “[rationally] assess their threat level”. More to the point, symbolic violence is still violent. The threat of sexual assault by men against women is inherently terrifying and damaging, whether it is carried out or not. Any attenuation of the threat of rape dehumanises all of us.
There’s a terrible double standard among cyber-libertarians. When good things happen online – such as the Arab Spring, WikiLeaks, social networking and free education – they call the internet a transformative force for good. Yet they can play down digital hate crimes as “not real”, and disown their all-powerful internet as just another communications medium.
Stephen Wilson, Five Dock, NSW.
Most people who favour the current Australian flag hold that the small Union Jack up in the corner is an objective nod to our history. The tacit assumption is that the Southern Cross is the major motif of the flag, and that the Union Jack was added. Yet this is a mythic misunderstanding of how our flag was put together, and is itself emblematic of a continuing servility to Britain.
The construction of the Australian flag was precisely the other way round! Technically, our flag is what vexillologists (flag specialists) call a "defaced blue ensign" of the British Navy. There are several blue ensign designs, and all of them start with the Union Jack, before being “defaced”. This is why the proud flags of New Zealand, Fiji, Tuvalu and all six Australian states are basically the same.
So our proud Southern Cross is literally a second class embellishment on someone else’s flag. Where’s the national pride in that?
It's not just the image of the Union Jack on our flag that offends my republican sensibilities. Australia as a young nation was subordinated in the very way its flag was contrived.
The search for an up-to-date flag continues ...
Many traditionalists insist Test cricket is superior, and that Limited-Overs cricket -- One-Day and, even worse, Twenty20 -- is for yobs. I count myself amongst those traditionalists but have long been a little uneasy that maybe I was just being a snob. It's an odd shortcoming in cricket journalism: it lionises the five-day Test format without really explaining why. But I thought about this afresh over the summer holidays (in front of the box, watching the enthralling Ashes series) and I think I have an analytical defence of Test cricket in the face of the Limited-Overs fashion.
I reckon the key difference between Test Match cricket and the Reader's Digest versions is that the latter remove most of the discretion and decision making of captain and players alike.
Cricket is really all about managing scarce resources. In all forms of the game, you are stuck with a team of 11. They're mostly specialists: bowlers or batters, and you have a wicket keeper who had bloody well better be able to bat a bit. So before the game even starts, at the time of team selection, you have to strike the right balance of personnel, and set a style for the team.
And then you take to the field -- a ridiculously large field -- and array just nine personnel (after the keeper and the bowler) to defend those wide open spaces. Do you attack with a close ring of catchers? Or defend with fielders out on the boundary? Do you set an aggressive off-side field, leaving a hundred-metre gap between stumps and mid-on, and hope like hell your bowler doesn't stray towards leg?
But in Limited-Overs cricket, so many variables are taken away. Your choice of fielding configurations is hobbled by the arbitrary "circle" that exists only to excite big flamboyant hitting. The bowlers' tactics are thus inhibited. Worse, your bowlers can only bowl so many overs, so (a) they don't have time to work away on their foes and set their subtle traps, and (b) you need four or five of them all-up, and so all one-day cricket teams basically look the same.
There is no discretion and hardly any decision making in the Limited-Overs format. In Test cricket, the quandaries multiply. How long will we bat for? How quickly should we bat? How much time do we need to leave ourselves to bowl the other team out? Do we declare? What's the weather going to do after tea? Or tomorrow?
In Test cricket, managing the bowlers is a never-ending and always shifting challenge. Maybe your strike bowler is on a roll with three wickets, but they're tiring after 12 overs: how long do you persist? And at the other extreme, maybe you're faced with two batters relaxed and comfortable after a two-hundred-run stand. How are you going to shake them up? Who's your surprise change bowler to extract that rare wicket?
Limited-Overs cricket takes all the really interesting decision making away from the game. It's amazing when you think about it; traditional Test Match cricket necessitates decisions at time scales from 100 milliseconds (the time a batter has to read a ball coming at them) to several days.
But Twenty20 cricket reduces all players to robots. Every ball must be hit and hit hard; six or seven runs per over isn't good enough. The batters have no discretion; they're infantilised.
Some of the most exciting cricket I've ever seen has been about the ball-by-ball drama of batting at the end of a Test Match innings, whether it's to save a game or press for victory. A Test match batter has the most exquisite decision to make on each delivery: to defend or attack. Who could forget Border and Thommo's brave last stand against England in 1982-83? Or Ian Healy's match-winning knock when he beat South Africa with a glorious unexpected six? As I recall he faced dozens of deliveries, each one presenting him with the make-or-break choice.
All manner of sports are casually compared with chess, but surely Test cricket is the only sport that shares the same intensity of decision making and tough compromises when managing scarce resources?
"Generic verisimilitude" is a nice big phrase! It means the accepted visual language that conveys reality in genre movies. In other words, cinematic clichés.
I've always been bemused by the sideways figure-of-eight black frame that tells us when a movie character is looking through binoculars. Have movie makers ever actually used binoculars? You don't get the sideways "8", instead you just see a nice black circle. But it's not the worst example.
I saw "Rachel Getting Married" a year or two ago, and thought it was pretty good except for the madly excessive handicam wobble. And I got thinking about that and realised what a terrible artifice it is. Ironically, handicam wobble has become the leading sign of generic verisimilitude in 'gritty' moviemaking, yet the wobble is entirely fictional.
One of the marvels of the human brain is the way it produces a steady image as we move around. We can walk, run, jump up and down even on a trampoline, and our steadfast perception of the world is that it stands still. This complicated feat of cognition is thought to involve feedback mechanisms that allow the brain to compensate for the visual field shifting around on our retinas as the skull moves, sorting out which movements are apparent because we're moving, and which movements are really out there. It's a really vital survival tool; you couldn't chase down a gazelle on the savanna if your cognition was confused by your own mad dashing about.
So, if the world doesn't actually look to me like it shifts when I move, then what is the point of a film maker foisting this jerkiness upon us? If I was really in the place of the cinematographer, no matter how much I dance about, I wouldn't see any wobble.
Moreover, motion pictures are the most voyeuristic art form. The whole cinematographic conceit is that you couldn't possibly be in the same room as the people you're privileged to be spying on. So again, why the "realism" of the handicam wobble, which is intended to make us feel like we're actually part of the action?
It's odd that in the face of suspension-of-disbelief, when the audience is already putty in their hands, filmmakers inject these falsehoods into the visual language of otherwise hyper-realistic movies.
UPDATED 10 Sep 2012
Another example. I was watching a mockumentary on TV, set in the present, featuring gen Yers, and the protagonists made a home movie. And when we see their movie, it is sepia-coloured and has vertical scratch lines. Now, when was the last time anyone used film and not digital video to make a home movie? I wonder what young people even make of this tricked-up home movie look?
Another example. NASA posts mosaic pictures from the Mars rover - like this one http://twitpic.com/at9ps7 - with the patchwork edges preserved, and where the colour matching is worse than what you can get with free panorama software on a mobile phone these days. With all their image-processing powers, why wouldn't NASA smooth out the component pics? Are they inviting us to imagine standing on Mars like a tourist with our own point-and-shoot camera?