Orthodox Jewish girls play with public pay phones on September 08, 2008 in Jerusalem. Lior Mizrahi/Getty Images.
The premise of today’s liberal society is that when individuals choose for themselves the world becomes a better place. The routes by which the Western world came to this conclusion were long and tangled. For some, it was the product of an optimistic Renaissance philosophy of man, free to soar once liberated from constraints of class and custom. For others, it was the result of a pessimism born of exhaustion after every dream of imposing a better world burned itself out in blood and despair. Economists, for their part, used mathematics and abstract models to demonstrate that allowing the individual to choose promotes the common good. As far as the political mainstream goes, though, all roads lead to the same Rome where the consumer is king and the path to the good life irreducibly individual. However polarized political debate becomes, there is one issue on which both sides sing from the same sheet: only they can be trusted to safeguard individual freedom.
While the boundaries of individual freedom have been extended across every obstacle of gender, sexuality and religion, there is one area which has proved unexpectedly resistant: narcotics. The legalization lobby is currently exulting in a string of victories decriminalizing marijuana and, in some cases, small-scale possession of harder drugs. It is easy to find voices extolling the public-health model of Portugal and predicting the imminent end of the war on drugs. Beneath the surface, however, the drug-legalization movement has lost momentum. Whatever the balance of compassion vs. convictions used to combat drug use may turn out to be, and whatever successes the legalization lobby might have in peeling off cannabis, the dream of treating drugs as just another consumer product is dead.
The dream died because, starting in 1996, America carried out an accidental experiment in drug legalization when Purdue Pharma first released OxyContin for sale. Marketed as an all-purpose solution to chronic pain, OxyContin effectively repackaged opiates as medication acceptable to the FDA. The dark and depressing story of how clinics across the United States became distribution centers for a highly addictive drug has, by now, been retold many times. Drug overdoses rose steadily to become the leading cause of death for people under fifty-five, resulting in the first sustained decrease in life expectancy since World War II. The increasing regulation of OxyContin (and its many sister drugs) has not fixed the issue, since the addicted have switched to easily available and even more toxic street opiates. And so, silently and without fanfare, drug legalization has receded back into the bunker of libertarian student clubs. The war on drugs may be fought with better, gentler tools, but surrender isn’t an option.
It is no mystery why narcotics have proved a stubborn exception to the regime of individual choice. The nature of addiction changes the situation in such a way that the idea of an individual exercising his or her own will becomes meaningless, even absurd. No one chooses to be a junkie. Addictive drugs hack into the pleasure and pain pathways of a brain that was never designed to deal with them. The addict is not a consumer at all, but a slave.
Once we admit, however, that narcotics are by their very nature inappropriate for the sphere of market exchange, a great vista of grey areas opens up before us. Prohibition of alcohol has been tried and failed, but why, exactly, is it acceptable for corporations to market alcopops to young adults as soft drinks-plus-inebriation? Certain compulsively consumed food products with no nutritional value, too, would appear to have been designed to hack into the brain’s dopamine system in a way that can best be described as addictive. The most obvious and egregious example of an analogue to addictive drugs, however, is probably something you have open in a window next to this very article.
Almost everyone now is familiar with the experience of continuing to scroll down Twitter or Facebook or Instagram long after it has ceased to be even marginally enjoyable, the voice of self-recrimination dulled, but not quite silenced, by the formless urge to keep looking at the infinite stream of images and characters. This urge is no incidental side effect. While social-media companies continue to experiment with different ways of generating revenue, the vast majority of their earnings come from advertising. What this means is that the user is not a customer, but something more like a resource to be mined. Social-media companies earn their daily bread by selling access to eyeballs, which they do by designing their platforms to maximize “user retention.” Silicon Valley hoovers up the world’s greatest minds and puts them to work ensuring that your ten-minute break becomes a forty-minute one.
The narcotic-like nature of social media is not just restricted to its compulsive qualities. Solid evidence exists linking extended use to higher rates of depression, anxiety, insomnia, and suicide. This effect is particularly marked among young people, and there is now widespread agreement on the reality of an adolescent mental-health crisis whose origin and development track almost perfectly with the introduction and spread of social-media use. The effect is especially acute among teenage girls, contributing to the astonishing reality in which 36.1 percent of American female adolescents go through at least one episode of major depressive disorder.
Given these effects, and assuming we can’t or shouldn’t prevent adults from having unlimited access to social media, the obvious question is: why is it legal for social-media companies to market their platforms directly to people too young to drive, drink, or vote? The question has no clear answer because it has never been seriously posed. Digital innovation runs, as with the market economy in general, on the principle of innocent until proved guilty. Companies may, and do, spring up overnight and offer new products and services that transform the way we live with no obligation to demonstrate that what they offer is harmless. If it later emerges that what they offer has led to a terrifying spike of self-harm among teenagers, well, that can be fixed later. Unfortunately, later never quite arrives, mainly because the pace of technological change easily outstrips the wheels of legislation.
But what if we did the opposite, and subjected every new Internet innovation to the same kind of risk assessment that we are supposed to use to ensure that the new wonder drug isn’t the next junkie fix?
It sounds unlikely, if not impossible, right? The major technology companies and their products are simply too large and too entrenched now in daily life. There is, however, a model for such a process already out there, if you know where to find it. To get a sense of what a world in which new technologies were vetted for suitability before being approved for widespread use might look like, one needs only to cross over the river from Facebook and Twitter’s Manhattan offices and step into Brooklyn.
The use of the Internet and social media within the ḥaredi world is a complex story, one that still awaits a comprehensive treatment. What can be said, however, is that it is now almost universally accepted that any married adult who needs a smartphone for work purposes can have one and remain in good standing as long as the phone has a filter for inappropriate content. At the same time, however, there is a fiercely protected universal prohibition on their use for school-age children, maintained by Ḥaredim all around the world. In different communities and families, the exact time when a bochur (young-adult male) will be allowed access to WhatsApp varies, but across the board, no sixteen-year-old outside the “at-risk” category will be logging on. While the Internet percolates more and more into ḥaredi society, it is scarcely more common to find a ḥaredi teenager with an Instagram account than to find one smoking crack.
This situation was not brought about by conscious design, but by the application of the fundamental ḥaredi principle, first elucidated by the father of Hungarian Orthodoxy, the Ḥatam Sofer, Rabbi Moses Schreiber (or Sofer; 1762–1839): חדש אסור מן התורה. The new is forbidden by the Torah: this mantra, announced as the Enlightenment was getting into full swing, completely flipped the central liberal assumption, defining the posture of separatist Orthodoxy until this day. Every new innovation must be treated as forbidden unless demonstrated otherwise, as guilty until proved innocent.
Historians will continue to debate exactly how Schreiber intended his slogan to be taken, and how strictly it has been adhered to in practice over the past two-and-a-half centuries, but the Internet has been as textbook an application as you could wish. While other communities adopted new technologies as and when they became available, exploring their benefits and mitigating their harms as best they could, the message from the ḥaredi rabbinic establishment was simple: no. Fresh from their victory against the television set, ḥaredi leaders were convinced that they could succeed in excluding the Internet from the ḥaredi home entirely.
It proved to be otherwise. Bit by bit, ordinary Ḥaredim asked for rabbinic allowances (or simply wrote their own) to set up businesses on Amazon, to use voice messages with their contacts in real estate, or to download lessons on the Talmud. Eventually, Internet use in various forms and degrees became more and more normal, as practically-minded community members explored ways they could use it to make their lives easier, or found that they simply could not do without. A million tiny cracks emerged in the edifice of blanket opposition to the Internet; today, WhatsApp is the single biggest disseminator of news and opinion among thirty-year-old Ḥaredim.
The more traditionalist or extreme elements in the ḥaredi community lament this laxity and there are occasional campaigns against this or that element of Internet usage, with varying degrees of success. There are, however, some elements of the Internet where the traditionalists have gotten their way and even the most modern and open-minded sections of the community are happy for it. The standout example is social-media use for children, a matter on which there is no question of balancing the benefits against the harms; it is all harm.
In other words, by starting with a blanket ban and forcing each beneficial aspect of the Internet to justify its utility, the ḥaredi community has managed to devise precisely the kind of filter that the wider world lacks and has successfully prevented Silicon Valley from monetizing adolescence in their community.
The success that ḥaredi Jews have had in fending off Silicon Valley, and the freedom for their children that they have won, reveals a number of paradoxes inherent in liberal society. They were achieved—and could have only been achieved—by means of the renunciation of individual autonomy that every ḥaredi Jew must make as the cost of remaining in the system. While parents in the mainstream world are forced to accept that how their teenage children spend their free time is a decision that will be made for them by tech oligarchs, a large proportion would, in a heartbeat, turn the whole thing off if they thought they could. This choice, effectively denied to the typical suburban mom and dad, is one that ḥaredi parents are only at liberty to make collectively because, paradoxically, they have forgone their individual right to make it.
The most obvious fact about ḥaredi life in the context of a wider liberal society is the astonishing range and degree of restrictions its members live with. Simply adhering to halakhah, of course, is constricting enough: an observant Orthodox Jew cannot choose what food he eats, how often he prays, or even whether he wants to flick a switch every seventh day. As if dissatisfied with the life-encompassing regimen contained within the Shulḥan Arukh, however, ḥaredi communities impose an extra set of restrictions covering everything from what you wear to the type of music you can enjoy. Sympathetic accounts of ḥaredi society, as with accounts of other separatist religious groups, tend to focus on the benefits members receive in exchange for these restrictions: typically, community, a sense of belonging, and family cohesion. The case of social media, however, reveals something more profound about the nature of individual choice in a traditionalist community. The average ḥaredi parent has the freedom to deny social media to his or her child precisely because the marginal ḥaredi parent doesn’t. The mechanism by which social-media use has spread through society was simply cut off at the beginning: you don’t have to worry that your daughter will be a weirdo Bronze Age pariah if prevented from logging onto Instagram because the cool kids aren’t allowed to use it, either.
The specific way that this works in practice—the culture’s homegrown enforcement mechanism—is especially illustrative.
There is no area of ḥaredi life where the lack of individual choice is felt more keenly than schooling. Countless ordinary ḥaredi parents send their kids to schools where they’re dissatisfied with the facilities or the standards of secular education because they don’t have any better options. And yet the primary means by which the prohibition on social-media use is enforced in the community is through what is an otherwise rather dysfunctional school system. Quite simply, no ḥaredi school will accept a pupil unless the parents have agreed to deny him or her any access to social media, and understand that a breach of this agreement will result in expulsion. The minority of Ḥaredim who might be tempted to give their son or daughter a half-hour on Facebook here and there are sufficiently frightened by this prospect not to do so; and so adolescent social-media use remains a fringe phenomenon.
It therefore seems inadequate to say that Ḥaredim merely give up freedoms treasured by the outside world in exchange for other benefits. The practical result of renouncing their individual choice is to give them the ability to negotiate global economic and social trends as a bloc, increasing their effective freedom to choose what aspects of the Internet to allow into their home. Just as individual members of labor unions forgo the right to make individual deals with employers so as to raise the average wage, ḥaredi parents waive the right to make their own compromises with the world of social media so that the job of raising children is easier for everyone.
The fait accompli of near-zero social-media use among teenagers is something that the ḥaredi community has achieved by breaking the fundamental rules of liberal society. So what lessons are there for the wider world? The path that it has taken to get there is not one that many would choose to follow even if they could. Some might conclude that they’re faced with a binary choice: join a separatist community or simply go with the flow of whatever capitalist modernity serves up to them. But that isn’t the only conclusion, or even perhaps the most compelling one.
Regardless of how it got there, the ḥaredi community has demonstrated to the world that the Internet isn’t a package deal. You can pick and choose. You can get all the good, time-saving, money-earning elements that you want, and raise children who never worry about how many likes their most recent selfie got. Market forces won’t solve this problem for you, and government isn’t about to step into the breach, either, but the mechanism that Ḥaredim have evolved to enforce this regime is open for wider use.
Taking the ḥaredi social-media model society-wide would be, in principle, quite simple, though it would still require enormous reserves of will. In order to wrest back control from social-media oligarchs, what parents at any given school need to do is come together and vote to demand a school policy that prohibits pupils from social-media use and makes admission contingent on compliance. Of course, this looks like a drastic restriction of the rights of parents to bring up their children as they wish, and of course it is. It is also the only practical way to give parents the actual choice not to let their children use social media. Those who are satisfied delegating their responsibilities to Mark Zuckerberg can always choose a different school. At first, this kind of measure is probably only realistic in private single-faith schools where parents can draw upon the reserves of shared common culture. However, if enough schools can successfully act as a unit to give parents back the right to refuse Silicon Valley’s blandishments, it could spread to become the norm, at least in middle-class suburban areas.
Defenders of the liberal society have often referred to the opportunities it affords its adventurous members to conduct “experiments in living,” the results of which the majority can draw upon to make informed choices about how to conduct their lives. Not every experiment in living, however, can be carried out by individuals; some require a community to opt out of the mainstream and test the bounds of the possible together. In communities like these, solutions can be found to the kind of missteps that are inevitable in the age of accelerating technological development. Ḥaredim have not only shown that they can allow the Internet into their community while rejecting its unambiguously bad aspects, they’ve also shown that others can do it, too. Subjecting children to the dangerous narcotic of social media isn’t something we all just have to put up with as part of the price of living in the modern world. The formula for excluding social media from childhood is there for every community with the courage to adopt it.