I decided to read this after hearing the author on the Recode Media podcast and reading some of her shorter pieces in the Times over the years. There’s a lot to think about in Magic & Loss – I enjoyed the lucid language and often insightful commentary. The homage to the death of the telephone was wonderful, and the quick take on ‘science’ writing in the mainstream media was funny – but there was also a share of flimsy moments (did she really just try to summarize a billion photographs on Flickr by talking about the style of two users?). I found myself occasionally waiting for more substantive technical discussion (maybe I’m conditioned to expect it in any writing about the internet), but I guess the ‘internet as art’ premise doesn’t leave room for grubby engineering stuff. Unfortunately, the end of the book veered into esoteric academia. It’s impressive to see someone as equally versed in obscure YouTube clips as in Wittgenstein, but wrapping the book’s closing chapters in personal academic history (something about tweeting at a physics professor?) left me feeling disconnected. I may eventually give this book another try, but next time I’ll go for the text (instead of the audiobook) so I can pause and follow up on the many arcane references.
“I fear we are the last of the daydreamers. I fear our children will lose lack, lose absence, and never comprehend its quiet, immeasurable value.” – The End of Absence
Many children this winter, especially in Boston, are having days off from school because of the weather. They’re being ‘absent.’ I used to love being ‘absent’ on snow days. There was a peculiar isolation in it, a kind of detachment that’s almost impossible to reproduce now. This winter, those kids in Boston are experiencing an entirely different ‘absence.’ They’re not absent in the way that I used to be.
The End of Absence by Michael Harris is another book about the internet and how modern technology is changing the human experience. I keep reading books like this. Most of them take a pessimistic view of what it all means, and the fact that I spend many evenings reading stuff like this sits at least moderately at odds with the fact that I spend all my days getting paid to embrace it. That contradiction will have to wait for another blog post.
So, is this particular work saying something of significance that other books like ‘The Circle,’ ‘The Shallows,’ or ‘You Are Not a Gadget’ haven’t said already? Maybe, maybe not. They’re all reminders that this isn’t a localized phenomenon – everybody’s feeling it.
The book starts with a summary of ‘kids these days,’ laments how no one reads anymore, and guesses that, due to the changing nature of communication and availability, neuroplasticity will turn our brains to puddles. The internet has led us to a permanent state of ‘continuous partial attention,’ and we should be adequately concerned. One dramatic statistic claims that if you’re over thirty, you’re probably having just as many electronic interactions as physical ones. This is particularly difficult because, if you’re over thirty, you’re also old enough to remember when this wasn’t even possible, and to be bewildered at what things have become.
So, what are the products of ‘continuous partial attention?’ We’re confessing a lot of stuff, writes the author: “it often seems natural, now, to reach for a broadcasting tool when anything momentous wells up.” Why does that matter? Because it’s apparently made us all think we’re celebrities. The findings of a study of 3,000 parents in Britain were cited:
“the top three job aspirations of children today are sportsman, pop star, and actor. Twenty-five years ago, the top three aspirations were teacher, banker, and doctor.”
The technology enables our banalities to become public performance, so public performers we (or our children) want to be.
In addition to our newly permanent residence in a virtual confessional booth, we’re also all experts now. The expression of public opinion is no longer filtered, edited, and perfected before presentation by trained editors. Some validations are in place to prevent complete falsehoods from spreading in places like Wikipedia and Yelp, but those forums are just too big to moderate efficiently. Bullshit abounds. Bullshit is what happens when someone is forced to talk about something they know nothing about, and it exists everywhere now that everyone is encouraged to be an ‘expert’ and rewarded for their ‘competence’ with likes, comments, retweets, etc.
Bullshit proliferation leads into the next problem created by the ‘end of absence’ – authenticity. The author makes an interesting point about how ‘young, moneyed people’ have made the ‘re-folking’ of life a priority – think Mumford & Sons. The IFC show Portlandia has been awkwardly successful at simultaneously satirizing and celebrating this kind of ‘return to roots’ culture, where after decades of fast food, people now want to know what kind of farm their dinner was raised on; or where, in the midst of the digital technology era, ‘steampunk’ advocates rebel and hold intensely serious seminars. The fetishization of the ‘authentic’ – record players and ‘old-fashioned’ moustache wax – is ‘the exception that proves the rule,’ according to the author.
Between all our confessing, expertise-sharing, and bullshit-spewing, we hardly have the attention for anything else. In the chapter on ‘Attention’ – and its recent universal obliteration – the author documents his attempt to read ‘War & Peace’ with the tone of someone trying to swim to the moon. He eventually finishes the novel, but not without claiming to have alienated himself from everyone and everything he knows in the process.
A few more chapters – on the erosion of our ability to memorize, and the ‘permanent bathhouse’ state of mind afflicting online romance-seekers – lead up to the book’s final act: the author attempts a temporary return to absence. His phone duct-taped to a table, internet connection severed, kooky old neighbors visited for coffee – he makes a valiant effort to go back in time, to when people could be ‘unavailable.’ No one ends up homeless or murdered, but the experiment reads dangerously close to an irrevocable shattering of domestic tranquility between the author and his partner.
Following the toe-dip experiment in returning to absence, the book’s final lesson is this:
“Just as Thoreau never pretended that cutting out society entirely was an option— and never, as a humane person, wanted to be entirely removed— we shouldn’t pretend that deleting the Internet, undoing the online universe, is an option for us. Why would we, after all, want to delete, undo, something that came from us? It bears repeating: Technology is neither good nor evil. The most we can say about it is this: It has come. Casting judgments on the technologies themselves is like casting judgment on a bowl of tapioca pudding. We can only judge, only really profit from judging, the decisions we each make in our interactions with those technologies.”
– The End of Absence
I can sense the ‘cycle’ of news media as a rotating blob, tucked just inside a massive doorway; the moment one tries to step away from it, a persistent wind pushes it closer and closer.
It’s unavoidable – even when I consciously decide that I’m not ready to jump back in after a break, I can’t go anywhere without incidentally grazing the ‘rotating media blob.’ I visualize it like Slimer from Ghostbusters, or the big ancient space portal in Stargate – in the case of Slimer, you’re not going to outrun it, and in the case of Stargate, you’ve gotta step through, just because it’s there.
In the waiting room at the dentist’s office, CNN blares the sound of gunshots in Paris. At home, my dormant iPad pushes alerts of Academy Award nominations; newspapers collect at the front door, and restaurants everywhere are painted with televisions that shower everyone passing by with what’s ‘happening.’
On the last day of my recent vacation, an article was sent to me describing the ‘In Case You Missed It’ (ICYMI) phenomenon and how it has become a kind of permanent purgatory for modern information consumers (anyone in the world with a phone or laptop). (“The Unending Anxiety of an ICYMI World,” John Holcroft, NYT)
Looking at the concept from the distance of having spent seven days alternating between a black sand beach and the cloudy rainforest, I could relate to the feeling of an itch that I had ‘missed something,’ but I felt no urgency or responsibility to catch up. It seemed that ICYMI only matters if you’re already ‘locked in.’
Each time the media blob brushes against my sleeve as I try to pass by it, I know that once I fully submit to the cycle, I’ll be pulled back in, and things will start to ‘matter’ again. Far away political events, trivial details of the intellectual arts, and incomprehensible fractions of data concerning the world economy will all congeal and form an awkward, incalculable load balanced across my knuckles as they hunch over their keyboard habitat.
It felt wonderful, for just a few days, to leave all that behind and truly participate in the real world, with people who are actually present – instead of expending mountains of energy trying to ‘catch up’ on everything I’m ‘missing.’
Enron: The Smartest Guys In The Room [Motion picture]. (2005).
This film nicely wrapped up what was a very complicated story. As the scandal unfolded, uncovering the important pieces of the narrative was difficult because they were competing with everything else in the daily deluge of news. Letting a few years pass and waiting for the dust to settle makes it easier to step back and take the 10,000-foot view.
When Enron was originally in the news, I didn’t have any sense of the great losses that company shareholders suffered, and I didn’t understand what a celebrated and ‘accomplished’ company it had been shortly before the scandal broke. Being a high school student in 2001, I wasn’t as tuned in as I would be if something like this happened now, so I appreciated the deep-dive explanation of what really happened.
I am proud that investigative reporting was one of the catalysts for uncovering the story. At a time when many people think the media is suffering, news is biased, and technology has rendered newspapers and magazines obsolete, this scandal was ultimately uncovered because a lone reporter asked a few tough questions. A lone female reporter, at that, who was not only dealing with a very ‘macho’ culture at Enron, but who also didn’t have much career experience before breaking the story.
Energy is a form of technology, and that’s what Enron sold – but this didn’t feel like a story about technology. The company’s trouble mostly came from its accounting practices, and from limiting the supply of its product to create false scarcity and raise prices. This could happen in any industry that serves a basic need of the population – auto makers, communication services, airlines, etc.
It raises the question: at what point does a technology, or a technology market, become regulable? There are probably formal definitions of this, but I’ve never thought about it before. Energy crossed that threshold long ago, and now the internet is approaching it – but does it meet the same criteria?
How would this story be different if Enron were a broadband provider? I don’t know if they would have been held to the same ethical standards, since unlike energy, broadband hasn’t yet been defined as critical to the functioning of everyday life in American society.
The film presents the notion that Enron only attempted these unethical actions because its people believed they were ‘smarter’ than the regulators, and that being smarter gave them permission to break the rules. Is this any different from the idea that ‘code is law’ – that whatever code someone is smart enough to come up with gives the coder permission to circumvent legality?
BBC – How Hackers Changed the World
While Anonymous was given most of the treatment in this piece, I didn’t feel like their story was the strongest thread in the overall hacker narrative. LulzSec appears to be the group that did the most actual damage, while WikiLeaks is the most ethically challenging.
Anonymous comes across as a bunch of people posting on message boards who all showed up to protest the Church of Scientology once. Occasionally they were able to DDoS some government websites. They are presented as large and formidable because allegedly ten thousand people participated in the Scientology demonstrations – but 10,000 isn’t that many, in the grand scheme. Ten thousand people shop at the Gap, ride the metro, or buy a hot dog every day. Boring and mundane stuff also attracts many people. Crowds alone don’t confer meaning or importance.
LulzSec, on the other hand, appeared able to cause more disruption with a much smaller and more focused group. Anonymous appeared too large to manage any kind of cohesive campaign, and the premise that it could operate without leadership doesn’t really have a historical precedent – anarchistic groups and societies have not accomplished much in the world, compared to groups with leaders and structure.
Discovery Channel – Secret History of Hacking
I think the order in which I watched these two documentaries made a difference in how I reacted to them. Watching the BBC doc first, and being exposed to Anonymous before Wozniak and Cap’n Crunch, had a different effect than the reverse order would have. I think I had less sympathy for Anonymous’s cause without the historical context of the hackers that came before them.
The hackers of earlier decades, who began with ‘phone phreaking,’ seemed to have a more innocent purpose. In the BBC doc, hearing the stories of Anonymous, LulzSec, and WikiLeaks, I didn’t detect any of the harmless, curious aim that the original hackers had.
The earlier hackers also seemed to push the technology further. They subverted the intended use of the technology in a way that was unprecedented. Instead of disrupting or harming the way other people used it, they just found a way to use it more freely themselves. They were apolitical.
One interesting moment came when an early hacker spoke on the initial perception of computers – ‘why would anyone want one of these in their house?’ seemed to be the original sentiment, back in the ’70s when computers were large and not very powerful. It makes me wonder what technology in its infancy today is getting the same reaction. 3D printers? Self-driving cars? Google Glass? What else are people shunning for its perceived uselessness that might someday dominate markets?
‘A man got to have a code.’ – Omar
As I wrote in a previous post, I just began a class in ethics and technology. During lecture last week, I couldn’t help remembering the quote from Omar in The Wire about how everyone should have a code, or sense of morals – even if they don’t adhere to societal norms.
One of the ideas I’ve been most interested in, after two sessions of the class, is the concept of ‘discussion stoppers’ – why they can be categorically expected to occur, and why they should be avoided.
I’ve never really enjoyed arguing for the sake of it. Many people get pleasure from the competition of proving their own righteousness or intelligence through ethical battles, and those people always turned me away from the activity. I prefer finding common ground in conversation, rather than exploring differences of opinion. In class, I’m finding that treating an ethically ambiguous subject requires a more concentrated effort than I’m predisposed to give.
‘Everyone has their own opinion, so there’s no point in trying to come up with a solution. It’s impossible.’ — this is a common perspective and one that I frequently give in to when a discussion becomes difficult. The textbook I’m reading suggests that it is incorrect to claim ethical progress can’t be made on account of the improbability of consensus. The fact that everyone can’t agree doesn’t mean that the discussion itself is useless, or doesn’t lead to minor advancements in understanding.
In the course of any typical week, I consume all kinds of news which touches on ethics. So, as part of the class, I’m starting to give more consideration to each scenario and what the ethical implications are, what claims were made to reach conclusions, and whether the claims appear to be sound.
Hitting my ‘ethical radar’ recently were several issues:
A police officer distracted by a laptop struck a cyclist with his car, killing him, and the officer was acquitted because he was answering a ‘work-related’ email. Since when are emails, or any other internet-based activity, considered real-time communications of such urgency that a driver should be excused for killing a cyclist because he needed to respond to a laptop? The base claim here – that answering an email while driving was more important than a human life – seems unequivocally wrong.
Adderall and other ‘neuroenhancers’ being used in top colleges: Is it ok for students to do this? Why is it different from athletes being issued suspensions for using drugs? Those are the questions hinted at in this New Yorker article, which is more descriptive of the phenomenon than suggestive of any ethical standard. It does make more transparent the norms which predetermined the subject’s choices – such as legal decisions categorizing Adderall and other amphetamines as prescription-only drugs.
The Death of Adulthood: A lengthy and fascinating article in the NY Times by film critic A.O. Scott. The premise is that American literary culture has always been youthful and rebellious, but until now those sentiments had purpose, directed against some specific enemy or authority. Scott claims that post-millennial culture has done away with adulthood, but without the ethical backbone of its predecessors.
Ray Rice and the videotape incident: Aside from the obvious conclusion that Rice’s actions were inexcusable, this story raised several questions about the ethics of surveillance. Was it ethical for the video’s owners to keep it private for so long after the incident? Does a person who makes a surveillance video have some kind of rights over it, or should they be obligated to make it public immediately? Since they are filming a public place, shouldn’t the video be ‘public,’ and viewable by anyone who is interested in that space? Why are videos filmed for surveillance kept more private than the places they are filming?
Thoughts on Identity is the New Money, by David Birch. 126 p. London Publishing Partnership, May 2014.
Despite its provocative title, I didn’t finish this book with a precise understanding of how money will be replaced by identity; but along the way there were several interesting points about the advancement of mobile technology as a payment mechanism, and the implications for digital identity and privacy. The brief case studies indicate that international efforts to build digital identity systems are further along than the USA’s, but no one is making great strides in adoption just yet.
I was left with questions about the book’s central idea, which is not necessarily a bad thing when reacting to this kind of abstract premise. Was he saying I’ll be able to buy goods and services based on how many Facebook friends I have? That the social graph alone will prove my ‘credit-worthiness’ and earn me whatever I would otherwise have to pay for with dollars? How does being part of the social graph actually increase, or enable, wealth, from a technological perspective?
I think the book would have benefited from a different title, like ‘Mobile Phones Are the New Money.’ To me, the mobile examples were the most interesting futurist perspective offered, and the ones that made the most sense. Instead of cash, mobile phones could communicate without exchanging a great deal of identity information – only that I am Person A, who has X number of dollars and would like to exchange them for a thing or service. No cash needs to change hands, or even exist, I suppose.
One of the most compelling ideas was that the economy can now support ‘infinite currencies.’ With physical money, we are limited by what we can carry – usually a single national currency. But with digital money, there can be an infinite number of currencies, assuming an infrastructure exists to support them – not unlike the dozens of credit cards some people carry. So I could issue ‘Brian Dollars,’ and you could carry them alongside your regular dollars, and when your phone initiated a transaction with me, I would tell it to use Brian Dollars only and it would comply.
The practical examples for this ability aren’t completely clear, but it seems like a logical idea. Maybe I will only give Brian Dollars to people who are nice, and you can exchange them for a cup of coffee. Or maybe my Apartment Manager will give me ‘apartment dollars’ when I pay my rent early, and I can exchange them for a ceiling fan. This kind of personalized exchange wouldn’t work with standard currency, since standard currency could be exchanged for anything – but with personalized currencies, the scope of transacting is easier to control.
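The personalized-currency mechanics described above can be sketched in a few lines of code. This is purely my own toy model, not anything from the book – the `Wallet` class, `deposit`, and `pay` names are invented for illustration. The point it demonstrates is the scoping idea: a payer names which currency a transaction may spend, and a payment fails if that specific balance is short, even when other balances would cover it.

```python
from dataclasses import dataclass, field

@dataclass
class Wallet:
    """A toy multi-currency wallet: balances keyed by currency name."""
    balances: dict = field(default_factory=dict)

    def deposit(self, currency: str, amount: int) -> None:
        self.balances[currency] = self.balances.get(currency, 0) + amount

    def pay(self, payee: "Wallet", currency: str, amount: int) -> bool:
        """Spend only the named currency; other balances are untouched."""
        if self.balances.get(currency, 0) < amount:
            return False  # insufficient funds in that specific currency
        self.balances[currency] -= amount
        payee.deposit(currency, amount)
        return True

# I hold both regular dollars and 'Brian Dollars'; the coffee
# transaction is restricted to Brian Dollars only.
me, barista = Wallet(), Wallet()
me.deposit("USD", 100)
me.deposit("BrianDollars", 5)
me.pay(barista, "BrianDollars", 3)
```

A real system would of course need the authenticated infrastructure the book describes, but even this sketch shows why scoped currencies are easier to control: the restriction lives in the transaction itself, not in trusting the recipient.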
The book’s historical references to the ‘giant stones’ and ‘tax-collecting sticks’ of centuries past illustrate that payment technology isn’t static. People haven’t been using credit cards or checks forever. Ancient systems were in place before what we have today, and therefore what we have today will someday also be ancient, replaced by new things. A good way to get people on board with adopting new things is to point out what the old things were, and how much room there is for improvement.
Without a finance background, there were macro concepts behind the cash-replacement idea that I didn’t really understand. My interpretation of the argument was that cash is expensive to produce and manage, and permissive of anonymous and potentially illicit transactions; therefore the financial system could be reformed to operate, beneficially, without cash. I agree that anonymity can be undesirable, but I don’t think creating and managing a cashless technology infrastructure will be any simpler than maintaining a cash-based one, nor immune to hacks and corruption.
The most important argument I gathered from the book was that privacy is increased when digital identity is leveraged to facilitate physical, in-person payments (or ‘mundane payments,’ as the author calls them). Through cryptographic wizardry, my phone can prove that it is me, Brian, who is using it, and anyone who interacts with it can be sure they are interacting with me – while I control which ‘parts’ of me, or which ‘identity,’ they get access to. If I only want them to know I have 20 Jumbo Dollars, that’s all they get to know; but if they also need to know I live on Sesame Street, or that I am not a convict, they may ask for access to that information too, and it can be proven authoritatively via public-key infrastructure, mobile phones, and identity management applications.
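The selective-disclosure idea – reveal one attested attribute without exposing the rest – can be roughed out in code. This is my own sketch, not the book’s design: the names `attest`, `verify`, and `PROVIDER_KEY` are hypothetical, and a real system would use asymmetric signatures, with the verifier holding only the identity provider’s public key. To stay within the standard library, an HMAC stands in for the signature here; the structure of the exchange is what matters.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing key; in practice this would be an
# asymmetric key pair held by the identity provider.
PROVIDER_KEY = b"identity-provider-secret"

def attest(attribute: str, value: str) -> dict:
    """Identity provider issues a tamper-evident claim for ONE attribute."""
    payload = json.dumps({"attr": attribute, "value": value}, sort_keys=True)
    tag = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(claim: dict) -> bool:
    """Relying party checks one claim, learning nothing about the others."""
    expected = hmac.new(
        PROVIDER_KEY, claim["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, claim["tag"])

# I hold several attested attributes but hand over only the balance;
# my address never leaves my phone.
claims = {
    "balance": attest("balance", "20 Jumbo Dollars"),
    "address": attest("address", "Sesame Street"),
}
shared = claims["balance"]
assert verify(shared)
```

The privacy property is structural: each claim is independently verifiable, so disclosure can be scoped per transaction rather than all-or-nothing.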
So, in short – this was a complicated but interesting take on the changing landscape of identity credentials, payments, and mobile technology. Maybe not all that fascinating or useful to people who work outside of the ‘Finance Tech’ industry, but perhaps these ideas will become more prevalent and widely understood over the next several decades as mass adoption grows.
I’m apologetically writing this well after it originally aired, but I’ve been watching Breaking Bad for the first time. (Spoilers will be small and few, out of respect for the uninitiated.)
Instead of offering my own full-throated bootlicking about how amazing the show actually is, I’ll simply quote from, and agree with, these words from the AV Club’s reviews of the episodes ‘ABQ’ and ‘Full Measure’ –
“…this show has been one of serialized drama’s greatest accomplishments. Television itself suddenly seems to have an expanded horizon of possibilities — for characterization, for juxtaposition, for thematic depth. Whatever happens from this hellish moment, the long descent to this point, with all its false dawns and sudden crashes, was singularly awe-inspiring, uniquely cathartic. People living through a golden age often don’t know it.”
“Extraordinary flowerings of art, technology, culture, or knowledge are obscured by intractable problems, crises, declines in other parts of the society… It’s easy to look at television, with its 500 channels worth of endless crappy versions of the same empty ideas, and conclude that everything’s gone to shit… Ironically, this pronouncement coincides with the greatest flowering of televised drama and comedy in the medium’s history.”
There are many qualities that make Breaking Bad an incredible viewing experience, the first of which is Bryan Cranston’s boundless performance in the lead role. His acting is the only reason I’m able to think of this show in such a realistic context, and analyze his character as if it were an actual person in the same world that I live in. I could offer unending praise for the acting, the brilliant camera work, the dialogue, etc. But I just want to focus this post on one specific thing that’s caught my attention as I set out to finish the series over the next few weeks… (let’s be real: days).
Walter White’s defining characteristic is arguably his struggle with identity – is he the murderous meth-cooking gangster boss Heisenberg? Or is he the doting father, soft husband, and nerdy brother-in-law? Is there room in a single fictitious character for both? (Yes.) Is there room in a real human being for both? (I believe so.) Maybe the show’s finale will answer some of these questions definitively, but I haven’t reached that point, so I’m still undecided on the matter. I’m willing to guess that there will still be plenty of room for interpretation of Walter’s moral character, even after the last episode’s credits roll.
The question of his identity seems so important because of other things happening in our culture right now. This is the age of Facebook, where the most private lives of the most everyday people are as public as any royal’s. The first season of Breaking Bad aired in 2008, the heady days in which ‘social media’ became a phenomenon too large for anyone to ignore. The show continued playing out on screen while, in the audience’s living rooms, internet technologies connected the personal lives of everyone around the world at a breakneck pace – most intensely, the lives of comfortably wealthy Americans, especially those with an interest in the sciences or technology.
If the impetus for Walter’s entire journey is his need for money – in Season 1, funding cancer treatment was his reason for embarking on a criminal campaign – how could someone of his cultural demographic overlook the biggest money-making industry of this decade, the internet? When Breaking Bad premiered, and for years before, the American economy had been driven by the high value of software and computer technology. But Walter isn’t part of that America, somehow.
By all accounts, Walter White – Caucasian middle-class scientist, teacher, and 2004 Pontiac Aztek driver – is the incarnate persona of a modern American internet user. If you knew a man with Walter’s pedigree, you would expect him to spend his time off in some dorky enterprise like geocaching, or beta-testing Google Glass. His chemically laced résumé screams “Googler.”
But in which episode did we ever see Walter crack open a laptop? Somehow, all this fancy new ‘social’ technology has overlooked him. Instead of being the positive social incubator it is intended to be, it becomes merely another threat to Walter’s fragile anonymity as a criminal.
The show doesn’t completely leave the internet out of its narrative – Walter Jr. raises money for his dad’s cancer treatment by setting up a donation website, Skyler does her money-laundering research on Wikipedia – but it rejects the idea, so often presented in today’s culture, that all of this online transparency is influential in a way that would prevent someone from taking fuller measures to hide their deviant intentions.
In the world of Breaking Bad, Walter is not persuaded by these popular new gadgets to connect in a positive way to his community, however much Facebook would like to “make the world a more open place,” and however much Google would like everyone to follow its corporate motto, “Don’t be evil.” Silicon Valley’s utopian rhetoric falls limply on Walter/Heisenberg’s deaf ears.
I might be overly sensitive to this idea, working as I currently do for a company, ID.me, whose purpose is to give individuals authority over their identity on the internet. In this field, as it exists now, all roads converge on transparency. There is no accommodation for subversive duality in the minds of those leading the development of digital identity. On Google, Facebook, ID.me, and anywhere else you want to be yourself online, you get only one persona, and it’s intended to comprise your whole self.
Popular opinion has recently treated privacy as debatable, far from an ‘inalienable right,’ and the public parade of social media is driving the idea further. The notion that governments and neighbors can snoop and sneak through a citizen’s life, online, is commonplace.
The narrative of Breaking Bad indirectly comments on the situation: it says Yes, a person may keep part of their life private… but they might be a drug kingpin. And with its morally circuitous characters, it also diffusely challenges the evolving concept of identity, by illustrating – No, the depths of a person probably cannot be summarized by a few photographs they post to their ‘wall.’
Half of my cognitive load on any given day is spent fighting the urge to read EVERY SINGLE ARTICLE on the internet. Fortunately, some make it through my productivity filter, and I allow myself to read them. Lately I’ve been using the very cool application Pocket to save things I want to read later.
Several pieces grabbed my attention this week. Each touched on the start-up culture in which I work, but I didn’t feel like the target audience – they all seemed aimed at readers outside the tech world: Rolling Stone’s big interview with Bill Gates, the NY Times Magazine’s ‘Silicon Valley’s Youth Problem,’ and two from the Wall Street Journal – ‘Success Outside the Dress Code’ and ‘Have Liberal Arts Degree, Will Code.’
Mr. Gates’ most interesting statements revealed his thoughts on morality, religion, and government, but he also answered questions about the current state of things – massive acquisitions of zero-revenue companies, and the possibility of living in a constant state of surveillance.
The Times article was engrossing, chronicling the division between youth-driven startup culture and the legacy of elder-generation technologists (like Zuckerberg and Gates). Is it just coincidence that Gates gave an interview to the youth-focused Rolling Stone at the same time the Times published a manifesto on the generational disconnect?
The two WSJ articles also share the ‘young tech’ theme. ‘Success Outside the Dress Code’ investigates the results of a study on how dressing casually in formal settings can influence opinion (a practice, common among young software developers, that I am happy to rant about); ‘Have Liberal Arts Degree, Will Code’ describes how young graduates of all departments are abandoning the academic disciplines they studied in favor of higher-paying software industry positions (as an English major working with a Ruby on Rails development team, this one really hit home).
So what catalyzed this deluge of similarly focused articles? ‘Big Media’ writes about technology often, but something about the tone of this writing seems different – Bill Gates waxing poetic on billion-dollar acquisitions and world-saving to the pot-smoking readership of Rolling Stone, the NY Times writer (a young Silicon Valley alum) broadcasting her concern over whether she should work for a hot young startup like Uber or a crusty old-guard firm like Cisco, and the Wall Street Journal exploring the incongruities of tech culture – how its citizens dress eccentrically and give up their educational idealism in favor of cold, hard cash.
Of the articles, Yiren Lu’s writing in the Times Magazine stands out the most. Her personal anecdotes as an intern in Silicon Valley bind well to the concrete examples of age division she describes. She rejects the presumption that older companies are home to “subpar, less technically proficient” employees, citing the number of patents owned by Cisco as evidence to the contrary. Yet, as the WSJ article describes, tech companies are trying to grab as many young engineers as they can – some going as far as offering signing bonuses to dissuade potential hires from finishing college. If only Mr. Gates had fielded a related question in his interview; he surely could have added a valuable argument to the debate.
The Wall Street Journal pieces are brief, and neither explores its territory with the critical, sharp eye that Lu focuses on her topic. But each shares the provocative attitude that a certain kind of delirium resides in Silicon Valley’s money-soaked culture. In his interview, Gates states: “When you have a lot of money, it allows you to go down a lot of dead ends.”
It’s hard to pin down exactly what statement these articles are all trying to make, if any. What I’m most curious about is how deeply these discussions will resonate with their audience, or whether they are only this week’s flavor of capricious media interest. Perhaps the journalists’ unstated intent, in their recent scrutiny of the modern technocracy, is to map those “dead ends” before too many people (without Mr. Gates’s resources) get stuck moving toward them.