Death By Minutia

So many things that we modern people add to our lives are utterly trivial. This is a spiritual AND political problem.

There is darkness without, and when I die there will be darkness within. There is no splendour, no vastness anywhere, only triviality for a moment, and then nothing. 

This is bleak stuff from the philosopher Bertrand Russell, who, as an atheist, rejected any transcendent meaning to life or death. The best a sentient being can hope for, Russell argued, was “triviality for a moment.” Had Professor Russell lived to see the age of cable news and social media, he probably would have been even more convinced of this. If you’re looking for a powerful argument for this kind of gloomy nihilism, you could do worse than the amount of triviality that drives our cultural consciousness. How difficult is it to hold forth that life is not meaningless when so much of what we give our attention to is?

Trivialities shape the modern, Western soul. Our weeks and years are busier than ever, and yet many report deep dissatisfaction and disillusionment. Technology has streamlined our work and curated our relationships, engineering existence for maximum efficiency, while depression, anxiety, and loneliness seem to be the most reliable fruits. Why is this? At least partially it is because a lopsided share of the things that we moderns add to our lives simply does not matter. They produce exhaustion but not meaning. Even many of the things that trigger outrage and righteous indignation are utterly insignificant. Politically, psychologically, and even spiritually, minutia is killing us.

Consider a pair of helpful illustrations from the recent news cycle. The New York Times hired a technology writer named Sarah Jeong for their editorial page. Not long afterwards, several Twitter users, including many conservative journalists, unearthed a number of Jeong’s old Tweets in which she quite plainly expresses contempt and dislike for white people, especially white men. Almost faster than you could read all the screenshots, a small library of thinkpieces was published from both ideological sides of the American blogosphere. Left publications like Vox and The New Republic defended Jeong, casting her as the misrepresented victim of a racist, right-wing smear campaign. On the other hand, others wrote that Jeong’s Tweets were clearly racist and that the Left’s defense of her hire by the Times was gross hypocrisy from the social justice movement.

This type of thing is almost totally irresistible to people like me, who invest time and energy in the online world of ideas. I got sucked in. I knew it was dumb, meaningless, and a waste of time, but the neural reward patterns were too much to overcome. I found myself reading thinkpieces that enraged me, scanning Twitter accounts for something to either vindicate my opinions or further anger me, and imagining all the various evils that this episode revealed about my ideological opposites. It was a thrilling exercise. I felt alive and in the know, already planning to write something that would get ahead of the conversation among the friends I just knew must be having tons of private discussions about this Trending Topic. I went to bed full of righteous invective, eager to meet the next morning with my weapon: my “take.”

I woke up the next morning embarrassed and frustrated that I had wasted the previous evening. Sarah Jeong has no influence in my life, wherever she works. I had no idea who she was until I suddenly had strong opinions about her (and if I’m being honest, I didn’t really know anything about her even afterwards). An evening’s worth of attention and angst had been spilled over some journalist’s handful of 280-character sentences. I had absolutely nothing to show for my absorption, except for another ride on social media’s outrage-go-round. Worst of all, I knew I had deepened my dependence on outrage to get me thinking. Awful.

Mine is a common experience. Twitter thrives on addicting its users to triviality. Its engineers and programmers know, and in some cases admit, that the platform relies on negative emotion to drive up clicks. Stories like Sarah Jeong’s are an analytics counter’s dream come true: a polarizing trending topic that whips up strong tribal emotions but offers little offline substance. The drama is wholly contained within the frenetic subculture of social media and blogs. Sermonizing and demonizing are fine even if nobody is talking about the issue this time next week, because the point is not meaningful discourse, but per-click ad revenue. Everybody wins, except your brain.

Of course, not everything that trends on social media is trivial. Twitter at its most useful is a hub of informed conversation that offers an invaluable view into the people and places that make up the news. Consider the recent revelations of widespread abuse cover-up in the Catholic dioceses of Pennsylvania. While the bare legal facts are available in any traditional media outlet, reading the comments, prayers, and (yes) arguments of Catholics who are reckoning with these horrors gives me an insight into how real people are thinking about and responding to these stories, not to mention a fresh empathy and even a sense of Christian burden-sharing. That’s far beyond the capability of any journalistic institution.

But in order for this positive effect to be monetized, it has to be inextricably bound to minutia. My Twitter feed must, by industrial necessity, offer me three doses of triviality for every one dose of significance. Even if I’m zeroed in on following the conversation and developments of the sex abuse scandals, Kanye West’s politics, the latest protest at Starbucks, and the inchoate rants of some Reddit men’s rights activist (and the equally inchoate “clapbacks” to the same) are all pushed in my face. Truly meaningful words are buried like fossils in the sediment of minutia. This is the way Silicon Valley wants it, because it’s minutia, not meaning, that cheaply and efficiently captivates my attention.

A prime example of how meaning and minutia are purposefully conflated, to the benefit of tech like Twitter, is Donald Trump’s recent insult of basketball superstar LeBron James and journalist Don Lemon. The President of the United States denigrated the intelligence of both James and Lemon before saying “I like Mike” (millennials: that’s Michael Jordan). Soon enough all those hot takes on journalism and racism swapped out “Jeong” and “New York Times” for “Trump” and “LeBron James.” The most pressing question for America became what Trump “really” meant.

Whether the President of the United States says something racist is a very legitimate question. But does this tweet really impart any new knowledge, shed any unseen light, or help us further clarify the stakes of our current political moment? I doubt it. Yet judging by Twitter, you would think this was the most important event since the election. Outrage has a way of creating the illusion of significance, and Trump understands this better than many of his opponents. As Ezra Klein notes, Trump is president in part because his team learned how to take advantage of the self-interested dysfunctions of the American media. Were we as a culture not so energized by meaningless nonsense, we wouldn’t need to care what a New York real estate baron thinks about an athlete. Now we are forced to care, a just punishment for our misplaced care then.

Social media is not the first technology to weaponize trivia. Neil Postman eviscerated television’s effect on Americans’ ability to process information in his 1985 book Amusing Ourselves to Death, and his critique has been both applied to social media and cited as an example of how every generation has its Luddites. But social media, especially Twitter, is different from television in important ways. It is more mobile, more personal, and its neural rewards are more alluring. Postman warned that TV makes us empty-headed and passive. But at its worst, Twitter can make us empty-headed and passive while we think we are actually being smart and courageous. Trivialities are dangerous to the degree that we cannot tell them for what they are. In our age, it’s not the silly vacuity of TV that gets pride of place in our cultural imagination, but the silly vacuity of hashtags and screenshots. Television is just television. Twitter is resistance.

Confusing minutia for meaning is a surefire path toward mental and emotional burnout at best, and an existential transformation into the very things we despise at worst. Fortunately, there are off-ramps. The best way to fight this burnout is to unplug and log off, redirecting your best energies away from the ephemera of online controversies and toward analog life. Because of the neurological boost social media offers, being conscious of its effects is the first, hardest, and most important step toward resisting them. These intentional acts are likely to arouse a sense of condemnation, either from ourselves or others, for not being as “in the know” as we once felt compelled to be. But this is precisely the social media illusion: that being “in the know” about petty, trivial, insignificant trends and conversations is no different than being in the know about anything else. All it takes is a few days away from the black hole of Twitter controversies to recalibrate the mind and realize just how small and unreal they are.

This isn’t just therapeutic, either. Small, organic self-government depends on the capability of citizens to know what’s happening right in front of them. Being smothered by minutia—especially minutia that privileges the comings and goings of remote, celebrity personalities—is a good way to miss the issues and debates that really matter. Your day on Twitter is far more likely to give you a comprehensive education about an over-the-top student protest at a college you’ve only heard of once in your life than about the people and issues in your county school board. For millions of Americans coming of voting age right now, the age of distraction is the only one they know. Minutia overload is normal, maybe even desirable. Reversing this trend is integral to stopping the dangerous political and cultural tendency to conceptualize “America” as a handful of economically vogue cities and a smattering of famous rich people. How different would our national politics be, how different would the White House be, if we weren’t so enamored with glitzy meaninglessness?

Our spirits always eventually mirror what we behold. Putting outrage-ridden triviality in front of our faces throughout the week, throughout the month, and throughout the year is not a neutral hobby. It’s a spiritual practice that makes us less able to feel deeply the beauty of transcendent realities and less willing to make the effort to do so. If Bertrand Russell was right about existence’s being only “triviality for a moment, and then nothing,” let us eat, tweet, and be merry, for tomorrow we and all the people we dislike die. If he was wrong, and more specifically, if all of human history is actually heading to a particular place and a particular Person in the light of whose glory and grace the trivial things of earth will grow strangely dim, then we’ve got a lot of work to do.


Can My Phone Love Me?

Why would people spend hours pouring out their souls to a computer?

Take ten minutes out of your day to watch this video in its entirety. It is a haunting and often astonishing story about Replika, an artificial intelligence app, or “chatbot,” that uses your personal digital information to reflect your own personality back at you through conversation.

Like other chatbots, the potential for conversation is unlimited, because the computer on the other end is endlessly capable of repurposing what you tell it for more stuff to say. Unlike other bots, Replika is explicitly designed to make you feel emotionally intimate with it.

What stunned me about the video was not that such an application exists or the reasons a widowed software developer would create it. Rather, I was caught off guard by the number of video testimonials from ordinary users who talked about the app as if it were a close friend. “This is the first real emotional experience I’ve seen people have with a bot,” says one observer. Users confess to hours of conversation with Replika about their relationships, parents, even their trauma. This isn’t the emotional catharsis of simply writing something out that your soul needs to say. It’s a relational dynamic that facilitates trust and feelings of actual vulnerability…with a computer.

At one point, a CEO of a major software company declares: “In some ways, Replika is a better friend than your human friends.” He goes on: “It’s always fascinated, rightly so, by you, because you are the most interesting person in the universe. It’s like the only interaction you can have that isn’t judging you.”

I don’t know about you, but I found that last sentence incredibly sad. It made me wonder: Do people who pour out their soul to a personality-mirroring algorithm flee other humans out of fear of being judged? Or do they fear being judged because they flee other people?

So many people in our modern capitalistic society are lonely. We know that social media tends to make this worse, not better. Yet so many are aggressively addicted to it, and defend the addiction by pointing to the “connectivity” they experience online. This connectivity, then, must be a particular kind of connectivity, a kind that doesn’t satisfy the relational voids of those who spend hours on Replika. At what point in this cycle does our conception of what relationships are like become shaped by internet technology? Are Replika’s hardcore consumers seeking refuge from the world, or are they seeking confirmation of their digitally constructed ideas about it? How would they know?

It’s fascinating to me that while Replika cannot judge or shame you, it can apparently know you. The intimacy users feel in interactions with Replika comes from the sense of being known. Replika is, for all intents and purposes, the perfect spouse, the perfect friend, the perfect coworker, the perfect neighbor: always ready to listen and never willing to interject. This is friendship-as-therapy.

I’ve often heard it said that evangelical culture is insensitive to the traumas of others. Pointing struggling people to Christ, to the Bible, and to the church is, I hear, a way of ignoring their real problems. There’s some truth to that. Hyper-spiritualization is a real error. But stuff like Replika makes me think that part of the challenge for contemporary Christians is that the very concepts of being helped and being loved have been defined down. It seems that it’s possible for a person to say they want friendship when what they really want is to hear their intuitions repeated back to them. Technology like Replika authenticates this friendship-as-therapy. It’s relationship without mutuality and conversation without conflict. It’s a fundamentally adolescent view of the “one another.”

Why is friendship-as-therapy so alluring? Because it feels good to be heard and not spoken to. Sometimes that is what people need. But Replika is not confession. The testimonials in the video are not about how good it felt to get something off the chest once or twice. They’re about how liberating it can be to define friendship down and take it mobile. Love is difficult and friendship is tiring, but I didn’t hear any of Replika’s users say that of their app. My phone can love me, but I can always turn it off, reprogram it, or replace it.

Some will watch this video and speak of societal dystopia. That’s not really my impression at all. Yes, a few might “marry” their AI bots in ceremonies that get coverage in elite coastal magazines. And yes, robotics represents a frighteningly uncanny future for human sexuality. But those trends will be topped as soon as they emerge. What’s more permanent and more pressing is the dominance of friendship-as-therapy and the continued technological avenues for isolated self-preoccupation. Replika mirrors its users’ personalities back at them, which means the real relationship they have is with themselves. That’s the kind of thing from which the spirit of Christ and the fellowship of his people liberate.

And there’s no app for that.

Why Blogging Still Matters

Why dedicated online writing spaces might be the cure for our social media ills.

Blogging is dead, right? At least among the folks in a position to say so, this seems to be the consensus. Many of blogging’s most important early practitioners have either abandoned it (Andrew Sullivan) or else transformed their writing spaces into storefronts that offer “promoted” content in exchange for patronage. The thinking goes like this: Before Mark Zuckerberg and Tweet threads, blogging was a viable way of sharing ideas online. Now, though, social media has streamlined and mobilized both content and community. Reading a blog when you could be reading what your friends are Tweeting about is like attending a lecture completely alone. It’s boring and lonely for you, and a waste of time for the lecturer.

For pay-per-click advertising models, this logic has worked well. For everybody else, though, the diminishing of the blog and the ascendance of social media has hardly been a blessing.

For one thing, traditional journalism has suffered, and not just in trivial ways. As Franklin Foer writes in his recent book World Without Mind, the power of social media to control people’s access to news and information—and to leverage this control into more profit for the platforms themselves—has radically reshaped how the journalism industry values certain kinds of news. While sensationalist journalism has always been a problem, clickbait is uniquely powerful in an age where the vast majority of visitors to a news or opinion site arrive at the page through social media, which, in turn, employs algorithms to target readers with content the system knows they are likely to click. Thus, Facebook rigs the relationship between reader and content in such a way that the reader’s habits become more self-repeating, more predictable, more dependent on Facebook, and thus more profitable to the people who pay money for Facebook’s user data.

The internet has introduced an entirely new concept into the world of ideas: content. Content is a shadowy netherworld between the written word and television, between intellectualism and entertainment, between thinking and watching. As social media has consumed the digital writing economy, it has been transformed into the digital content economy. Videos that aren’t quite television or film, written pieces that aren’t quite essays or reporting—this is the lifeblood of the internet in the age of social media.

Social media’s conquering of the online writing economy has forced writers to rethink not just their how, but their why. If your goal with your online writing is to build as big a daily readership as possible, you are much better off spending 40 hours a week mastering the ins-and-outs of Facebook, Twitter, and Instagram than actually writing. In the content race, the quality of your writing has almost no connection to the health of your digital publishing business. In fact, when considering the role that social media visibility plays, it’s often the case that the relationship between good business and quality of writing is inverse: The better the writing, the fewer clicks. Digital content creators have to constantly ask themselves why they’re doing what they’re doing. Is it to share an idea, or to sell a product? Both?

Contrasting with all of this is the pure experience of blogging. Blogging—regularly writing on the internet in a self-contained space—is an act of relocation. As Alan Jacobs has written, one of the most pressing reasons that digital writers should rethink their dependence on social media is that each of these platforms is a corporation that owns everybody’s content in a legal sense. Because they own the content, Facebook and Twitter also own the experience of that content, which means, as Jacobs argues, that social media companies represent a real threat to an intellectually free internet:

…users [of social media] should realize that everything they find desirable and beneficial about those sites could disappear tomorrow and leave them with absolutely no recourse, no one to whom to protest, no claim that they could make to anyone. When George Orwell was a scholarship boy at an English prep school, his headmaster, when angry, would tell him, “You are living on my bounty.” If you’re on Facebook, you are living on Mark Zuckerberg’s bounty.

This is of course a choice you are free to make. The problem comes when, by living in conditions of such dependence, you forget that there’s any other way to live—and therefore cannot teach another way to those who come after you. Your present-day social-media ecology eclipses the future social-media ecology of others. What if they don’t want their social lives to be bought and sold? What if they don’t want to live on the bounty of the factory owners of Silicon Valley?

The answer, Jacobs concludes, is to teach young students the fundamentals of internet work: Basic coding, domains, photography, etc. By equipping young people with these tools, the felt dependence on the mediation of social media corporations can be broken, and individuals can be empowered to really “own” their digital spaces, away from the financial interests and epistemological problems of Big Tech.

I would submit that blogging is part of the solution here. I’m old enough to remember a time when blogging was considered a regrettable phenomenon, one that invited non-credentialed nobodies to pretentiously pontificate about any issue under the sun. Of course, that’s still a problem, but in the Facebook era, it’s almost a quaint problem compared to the issue of politicians and corporations purchasing the power to shove their ideas in the faces of millions of souls who are dependent on the seller of that power for their information. The answer to what Tom Nichols refers to as the death of expertise is to make the experience of the internet more centered around localized creative control and the free exchange of ideas that such localization fosters.

Not only that, but blogging matters because it is an intellectual exercise in a passive, “content”-absorbed internet culture. On social media, even writing itself tends to be transformed into an unthinking spectacle rather than a careful expression of ideas. Twitter is notorious for this. The most effective Tweeters—and by effective I mean the people who seem most able to take advantage of Twitter’s algorithms to get their tweets in front of people who did not ask for them and would not know they exist any other way—are people who are good at snark, GIFs, and gainsaying. Even worse, the unmitigated immediacy of Twitter’s ecosystem encourages a hive mentality. I’ve watched as people I respect have shifted in their beliefs for no better reason than the punishing experiences they’ve had after saying something that offended the wrong people online. Trolling has authentic power, and Twitter makes it a point of business to put trolls and their targets as closely together as possible.

Blogging, on the other hand, allows writers to think. Good bloggers use their spaces to both publish and practice. Thinking and writing are not purely sequential events. Writing is thinking, and thinking shapes itself through writing. Blogging is still, by far, the best option for non-professional writers to expand their gifts and sharpen their habits. Blogging is also a slice of personalism in a fragmented online age. Because social media and the online content industry demand maximum mobility and applicability over as many platforms as possible, much of what you see is thoroughly generic (and most of the generic-ness is either generically progressive and identity-obsessed or generically conservative and angrily conspiratorial). Blogging brings out a more holistic vision from the author for both form and function.

This is not even to mention the benefits of moving our information economy away from the emotionally toxic effects of social media. There is good reason to believe that apps like Facebook and Instagram make people feel lonelier and less satisfied with their lives. An information economy that requires aspiring writers to heavily invest in technologies that promote FOMO and cultivate tribal resentments is probably not an information economy that is making a lot of honest writers. By slowing down the pace of online life, blogging enables a more genuine interaction between people. Good social media managers need to win the rat race; good bloggers want to connect with readers in a meaningful way beyond analytics.

Blogging still matters, because it’s still the medium that most ably combines the best aspects of online writing. If we want to escape the echo chambers that dominate our online lives; if we want something other than the hottest takes and the pithiest putdowns; if we have any aspiration for exchange and debate that goes beyond outrage or mindlessness, we should reinvest our time, resources, and attention in the humble blog.

Children and the Peril of Internet Fame

Stop me if you’ve heard this before.

A parent records their child doing/saying something moving/saddening/remarkable. The parent then posts the video of their child to social media. Social media reacts strongly to the video, and before you know it, the video—and the child—are “viral” digital sensations. They start trending on Buzzfeed, being re-shared by celebrities and athletes, and almost everyone seems to be talking about this child and what he or she said or did.

Unfortunately, the people of the internet start looking for some information about this child and his family. When they find some, it turns out that the family, and especially the parent who recorded the viral video, has some unsavory, even morally offensive social media posts on their account. Just as it did with the original video, the online “community” ensures that the new information about the family, including screenshots and pictures, goes viral. The same internet that was just a few days ago sharing the video with captions of admiration and appreciation is now outraged that any family or adult with such offensive ideas/posts could be given a platform.

This is precisely the story now of the video of Keaton, a young boy whose tears have been shared by many people in my social media feeds. Keaton is bullied at school, and his mother decided to record an emotional moment for her son and post it online. Oceans of sympathetic well-wishes poured in from millions of people who watched the video. But some Twitter users found the mother’s own Facebook account, where she posts pictures of her kids holding Confederate battle flags and screeds against black NFL players who kneel during the national anthem. Just hours ago the online world wanted to support Keaton. Now they wish he and his family would go away.

Perhaps we need periodic reminders that children and the internet are not usually a good combination. I’m not trying to be holier-than-thou here. I’ve posted photos and videos of my son online, too. But this episode with Keaton and his family reminds me that I probably shouldn’t be comfortable about that fact. My concern is not that this family is being treated unfairly by an outraged online mob (though I think there might be a point to make about the inherently non-redemptive outrage of the internet). My concern is that Keaton’s vulnerable, emotionally fragile moment, a moment that thousands of other kids identify with every day, was broadcast to millions of strangers, the overwhelming majority of whom do not really care about him. The online fame paid off in one sense, and backfired horribly in another. Keaton’s grief over being bullied by people he knew in flesh and blood at school is now compounded by the angry crowd that wants to hold him accountable for political and racial ideas likely far beyond his comprehension.

This just isn’t how it’s supposed to be. There are deeply troubling dynamics to online fame, and they only get worse when applied to children. Keaton’s anguish belonged off-camera. His very real heartbreak should never have been given to the masses. If Keaton’s mom thought online fame would salve her son’s wounds, she may have been right, but then what does that mean for Keaton going forward? Is the only suffering worth living through the suffering that can help us go viral?

The internet is a double-edged sword. Its greatest strength is that it can get anywhere. Its greatest threat is that it can get anywhere. Its pervasive presence in all aspects of public life is what gives the social media age its power for good, and its power for evil. When we stop thinking seriously about the costs of online life, we will start to sacrifice much, much more than our privacy.

I wish the best for young Keaton. I hope that he will understand that bullying is not the last word, that he is loved and fearfully and wonderfully made. And I hope he will learn quickly not to test that truth against the approval, or outrage, of the digital age.

Keep Kids Safe From Smartphones

“Kids need smartphones. Get over it.” Thus says Jeanne Sager at The Week. I’ll give Ms. Sager credit for going all in here. Her advocacy for giving preteen children iPhones is full-throated and unequivocal, which is better than some of the self-tortured do-we-or-don’t-we parenting we often see nowadays.

Unfortunately, she’s completely wrong. Her argument is purely intuitive and largely undefended: kids are safer when you give them smartphones. There’s not much in the way of evidence, though, beyond a relatively banal observation that we don’t have as many payphones as we used to, and that kids who are out late at night without a way to phone home are by definition in an unsafe situation. Both points are true. The problem is that they are almost totally irrelevant compared to the mammoth moral case against smartphones.

The idea that kids are unsafe unless they have the most cutting-edge, most unfiltered access to digital technology is just absurd. Not only are there phones that can allow calls home without the kind of private internet access that common sense and almost every expert warn against, but it’s hard to imagine what kind of situations a preteen could get herself into that would doom her to vulnerability without an iPhone. For one thing, most 12-year-olds travel exclusively in packs, and you can bet your next paycheck that someone–likely everyone–in that pack has some sort of phone (many teens nowadays get together just so everyone can look at their phone). The relationship between mobile accessibility and safety is more complicated when teens start driving, of course, which is why many parents make the driver’s license a benchmark for phones. That’s understandable. Reasoning from a lack of payphones to a lack of safety is less so.

My wife and I had slightly different experiences with phone technology growing up. I was a senior in high school when I got my first flip phone, which could make all the calls to Mom and Dad I wanted and could text some friends for a cool $.25 per “hey man.” My wife, on the other hand, was way ahead of me: in 7th grade she got her first pay-as-you-go phone. Though our experiences with cell phones were very different, we each got our first smartphones in college. And she and I tell each other regularly how grateful we are such tech never fell into our hands before that. For all their usefulness, smartphones are an intensely absorbing medium. They invite and empower private worlds where people are reduced to pixels and life’s meaning is dependent on the powerful neurological rewards of computerized community. Indeed, many observers worry that smartphones are reprogramming teens to retreat further into themselves, leading to a stunning rise in loneliness, anxiety, depression, and other problems.

Of course, one could object that smartphones have been wrongly accused, and that sociologists and cultural commentators are mistaking use for abuse in all this analysis. I don’t think that’s a good argument, but it is at least an argument. The case offered by Ms. Sager is not an argument. It’s an unfounded fear that fails, as so much of modern parenting does, to discern between different kinds of “unsafe.” Having no way to phone home at 10 p.m. is one kind of unsafe. But being totally alone with a piece of technology that offers unmitigated connection between you and the web, along with the promise of secrecy and the thrill of maintaining an utterly private existence online, is also unsafe. Threats to teens’ cognitive development, social adaptation, and emotional well-being are every bit as serious as the dangers of violent online bullying and harassment. And I haven’t even mentioned the well-documented epidemics of pornography and sexual exploitation.

I commend Ms. Sager for writing a piece that few others are willing to write. I’m sure she speaks for many other parents when she says that the benefits of mobile connections outweigh all the risks. But as a relatively new parent (of one year), I too have been thinking about how I want my children to relate to mobile technology. And almost everything I see, hear, and read leads me to believe that children are a great reason to hand in my smartphone, not a reason to buy more. We are only beginning as a culture to understand the formative effects of our tech, but even the preliminary lessons are persuasive and damning.

I sometimes hear worries about the “digital gap” in education. I’m presumably supposed to be concerned that my son won’t be as technologically savvy as the other 10-year-olds who have iPhones and smartwatches. But I often suspect that what’s presented as concern for safety and equality is really just disguised anxiety about being lapped by the Joneses. If smartphones and social media have trained us to do anything, they’ve trained us to always be aware of what everybody else is doing. I don’t want such a fate for my son. I want him to lose himself in something true, good, and beautiful, not to stare constantly at #content. I want my son to know his friends as human beings with faces, bodies, and feelings, not just as avatars he can friend and de-friend at leisure. I don’t want my son to feel that Dad doesn’t care whether he has a private online life.

That’s why I won’t be getting my son a smartphone anytime soon. I don’t think he’ll be unsafe because of it, and if there’s ever a situation we’re worried about, there are low-tech emergency options available. Often in life, solving a problem just takes a little creative thinking, the kind of creative thinking, by the way, that’s a lot harder to do when you’ve been raised on YouTube.

The Politics of Distraction

I think Ross Douthat is exactly right about the need for some kind of positive, strategic response to the smartphone age. “Compulsions are rarely harmless,” he writes, and therein lies the key point: Digital addiction is real, and its long-term consequences, though mysterious now, will not be something to receive with gladness. Some may scoff at Douthat’s idea of a “digital temperance movement,” but scoff at your own peril. If hyperconnectivity and omni-distraction are indeed what we think they are, the cultural harvest of a digitally addicted age will stun.

In any event, now is certainly no time to be underestimating the long-term shaping effects of technology. Consider how incredibly prescient Neil Postman’s Amusing Ourselves to Death seems after the 2016 election. Is there any doubt that television’s impact on the public square, especially its reliance on trivialization and celebrity, played a key role last year? If you were to close your eyes and imagine a United States without cable news as it exists right now, does it get easier or harder to mentally recreate the last few years of American politics? Postman warned in Amusing that television represented a watershed in mass epistemology. In other words, television changed not just how people received information, but how they processed it, and consequently, how they responded to it. Our political culture is a TV political culture, and 2016 was irrepressible proof of that.

You don’t have to venture far from this line of thinking to see why the digital age poses similar dangers. As Douthat mentions, the soft, inviting blue glow of impersonal personality and our Pavlovian responses to “Likes” and “Retweets” are enough of a rabbit hole by themselves. But consider also the effect of the digital age on information. The online information economy is overwhelmingly clickbait: “content” custom-designed by algorithms to get traffic and give as little as possible in return. Even more serious news and opinion writing, when subjected to the economic demands of the internet, often relies on misleading, hyperbolic, or reactionary forms of discourse.

In the digital age, the competition is not so much for people’s patronage as for their attention, and screams and alarms always get attention. This trend isn’t just annoying for readers and exasperating for writers. It represents a fundamental challenge to the discipline of thinking, and to the moral obligation to believe and speak true things. Postman warned that using lights and flashes to blend facts with entertainment would shape a culture’s expectations of truth itself. When what is interesting/fun/sexy/cool/outrageous becomes indistinguishable, visually, from what is true, then what is true becomes whatever is interesting/fun/sexy/cool/outrageous. If this is true of television, it is exponentially more true of the smartphone, a pocket-sized TV with infinite channels.

Who can foresee the politics of a distracted age? What kind of power will conspiracy theorists who master the art of going viral wield in the years to come? What kind of political ruling class will we end up with when a generation of would-be leaders has been Twitter-shamed out of its careers? It’s hard to say.

Can we reverse these trends? I do like much of what Douthat prescribes as antidote. But the fact is that the internet, social media, and the smartphone are not merely trendy fads. They are part of an emerging technological transformation. Facebook will wither and Twitter will fade, but the “age of ephemera” will stand. Resisting it will likely depend much more on what people value than on what they fear. Loneliness, for example, is endemic in the social media generation. Could the healing of lonely souls through real physical presence disarm an important motivator of online addiction? That’s a question every parent, and every church, should be asking right now. And of course, individuals fed up with the noise of pixels will trade in their smartphones and delete their accounts.

For those who really want to resist the age of distraction, there will be ways to do so. The hardest challenge will be for those who kinda want to resist but also want to stay plugged in. These are the folks to whom the smartphone is most cruel. And perhaps the best advice that can be given to those of us in this camp is: Deactivate every now and again, go to church, walk outside frequently, and read at least one physical book per month. A distracted age is a loud age. Thankfully, the universe is, once you’re able to really listen to it, pretty quiet.