Jews Shouldn't Take the AI Hype at Face Value

Technologies have radically changed religions in the past. But that doesn’t mean they’ll do so this time.

A Google Android logo and mascot at the Mobile World Congress 2024 in Barcelona on March 25, 2024. Joan Cros/NurPhoto via Getty Images.

Response
March 27, 2024
About the author

L. M. Sacasas is the executive director of the Christian Study Center of Gainesville, Florida, and author of The Convivial Society, a newsletter about technology and society.

“Tech bubbles,” in the historian Lee Vinsel’s apt phrasing, “are bad information environments.” I think any discussion of AI needs to begin with the acknowledgement that we are in just such a tech bubble. Tech bubbles are bad information environments for at least two related reasons. First, tech bubbles generate significant levels of hype. I’m defining hype as unsubstantiated and misleading claims about a new technology’s power and potential consequences. And hype can, of course, be earnest in some cases and disingenuous in others. In other words, some people actually believe the hype themselves, while others deploy it more cynically. An information environment constituted by a high percentage of hype can make it hard enough to think clearly about new technologies. But that is only part of the problem.

Vinsel has also identified what he calls criti-hype. Criti-hype is Vinsel’s term for critiques of new technologies that, however well-intentioned, take the hype at face value. In other words, while we typically think of hype as having a positive valence, criti-hype has a negative valence, but it is unhelpful because it is insufficiently critical of the hyped claims made for the new technology in question. As information environments, tech bubbles contain high degrees of both hype and criti-hype in varying proportions. To extend the metaphor a bit further, we can conceive of hype and criti-hype as intellectual pollutants compromising the information environment.

Whatever else we might say about AI right now, it seems clear that we are in the kind of situation Vinsel described. Anyone seeking to learn more about AI, especially someone without any particular expertise in the relevant technical fields, will almost certainly encounter myriad examples of hype and criti-hype. AI is described by some as a revolutionary, world-changing technology that will usher in an age of untold abundance, prosperity, discovery, and flourishing. Others fear that it will destroy the human race as we know it. Take your pick. The point is not that either of these futures will necessarily unfold, of course, only that when you’re trying to get your bearings, the proliferation of perspectives ranging wildly along such a spectrum of opinion makes it very difficult to think concretely and clearly about the path forward.

This is bad news for individuals, communities, and organizations who are in the position of having to make decisions about the adoption and implementation of new AI-powered technologies. As Moshe Koppel made clear in his essay about AI and Judaism, religious believers and communities are not at all immune from such challenges or exempt from grappling with the emerging perplexities. Indeed, within my own tradition, Christianity, one could credibly make the case that one of the most significant developments of the last millennium, the emergence of Protestantism, was an unintended consequence of an earlier technology, the printing press.

One upshot of our moment, however, is that we are, generally speaking, better positioned to recognize how new technologies might radically re-order religious practices and reconfigure religious communities. So while tech bubbles make it hard to think well about new technologies, at least we are attempting to do that kind of thinking. And we are not alone in the work. Across various traditions, disciplines, and backgrounds, there is a heartening collective effort to think more critically about new technologies for the sake of the truths and principles that matter most. In that spirit, and with all requisite humility, I’m glad to offer a few reflections on Koppel’s observations about the import of AI.

I’m appreciative of Koppel’s clarifying comments about the history of AI, his focus on the most popular current tools, and his circumspect qualifiers about the prospects of unabated technical progress along current trajectories. It is helpful to recall the long history of breakthroughs in artificial intelligence alternating with “AI winters” when the models and approaches of the day were exhausted. According to some experts in the field, it may be that, despite all the hype and criti-hype of the last year, we are, in fact, approaching a technical plateau with regard to the capacities of generative AI tools. With Koppel, I think we do best to set aside the most fantastical fears about the future perils of super-intelligent AIs and take up the more realistic challenges we presently face, for example, the possibility that AI tools will lead to widespread unemployment.

For what it is worth, I am probably more skeptical than Koppel about the near-term potential of AI to radically alter the employment landscape. But I hold my skepticism loosely! In any case, I think we have sufficient reason, as things now stand, to think more deeply about the nature of work and its relationship to human flourishing. We have reason to do so not only because of what might transpire in the next decade, but also because of what has transpired in the last two centuries. Part of the challenge of AI is to see it emerging not only within a technical trajectory whose beginning is typically dated to the 1950s, but also within a longer techno-social trajectory that reaches back centuries. Within that trajectory we can see that the expectations, fears, and hopes pinned to AI reflect already existing assumptions about the nature of work and the human condition.

I affirm Koppel’s wisdom regarding the overvaluing of work. Too much of our identity and worth has been hitched to the kind of work we do and our success in it. My own tradition has contributed in complicated ways to this development. If AI makes us rethink this overvaluing of work, we should welcome the opportunity. But not, it seems to me, as what the very online tend to call a cope. In other words, I would not want to use religious categories to produce ad hoc rationalizations for the disruptions techno-economic forces unleash on human society.

It is true that human beings are not made merely to work and labor, and the Sabbath is a powerful reality that weekly reminds us of our proper orientation. It is good to cease from our labors and to rest. It is good to remember that we labor for the sake of the greater goods. But the other six days of creation and the mandate to cultivate the garden also remind us that work can be good and fulfilling. As Kohelet reminds us, it is good and fitting for a person “to eat and drink and find enjoyment in all the toil with which one toils under the sun the few days of his life which God has given him.” If human labor is valuable as a site of meaning and satisfaction, then we should be hesitant to see it thoughtlessly automated for the sake of efficiency and profit.

Again, it is good to remember that AI is not appearing out of nowhere. It emerges out of a powerful technological and economic order, one which is not necessarily ordered toward human flourishing. Technology is a powerful force in human affairs, but so are greed and injustice. Like all technologies, AI is entangled in social realities, and we should not presume that political and economic actors are wholly powerless in the face of inevitable technological change. From this perspective, the most critical contribution of Jewish tradition may be the passionate voice of the prophets calling for justice and accountability, and not only on behalf of the potential victims of AI’s future disruptions of the labor market. The marketing around AI, much of it falling under the category of hype, obscures the human and environmental costs of the immense machinery that makes possible the computational marvels we casually interact with each day.

Koppel’s own work led him to focus quite naturally on AI as a tool for the retrieval and organization of massive amounts of textual information. And I will offer a few comments about what might be at stake for people of religious conviction. But first, allow me to reiterate Koppel’s own warnings about the dangers of relying on AI tools as religious oracles dispensing the accumulated wisdom of the tradition on demand.

The dangers Koppel outlines are clear and present. “If the study of Torah and the internalization of its values have inherent worth and are not merely instrumental in deciding on a course of action,” he warns, “trading internalized Torah for oracular answers is a bad deal.” This is exactly the right sort of concern to be raising. 

When presented with the prospect of AI tools that will scour the immense textual resources of the tradition in order to yield on-demand answers to questions of moral and religious significance, the chief concern many will raise will simply be one of accuracy. Will the machine yield accurate answers, or will it hallucinate? Critics of AI may take some comfort in the degree to which LLMs still yield factually incorrect or altogether fabricated responses to certain kinds of queries. The reasoning runs: if we cannot trust the machine 100 percent of the time, then we cannot trust it at all, particularly not on matters of the highest importance. But this framing obscures even more vital dynamics at play.

For one thing, it is possible that present technical obstacles will be overcome at some later date. If so, we should ground our critical perspectives on something deeper than technical shortcomings. We should ask, as Koppel invites us to do, what if those obstacles are overcome? Perhaps they won’t be. But framing the question this way often helps us get to the heart of the matter. Moreover, I think we should acknowledge the degree to which, presented with a less-than-perfect but nonetheless more convenient option, individuals will take it.

So what then? With Koppel I would urge us to consider that what is most at stake is not the accuracy of the outcome but the morally and spiritually formative nature of the relevant processes. One temptation posed by the technological mindset is that of conceiving of the means to an end as interchangeable matters of indifference. So long as we get the same or better outcome, how we get there, the technical means we deploy, matter not. But this is a grave error. When the character of the individual and the shape of the community are at stake, then how an outcome is achieved is of critical importance. It is one thing, for example, to have the whole of the biblical text ready to hand through a searchable app. It is quite another for the divine commands to be “written on the tablet of your heart.”

When I consider that many of our most incisive thinkers about the consequences of new technologies have been religious thinkers, I sometimes wonder why this might be the case. There are many reasons, I’m sure. But let me close with two. For one thing, those who adhere to the first commandment will be less likely to assign divine prerogatives to technology or to place transcendent hopes in it. For another, religious convictions, particularly about the human condition, serve as a standard against which technological developments can be judged and evaluated. This may be the great gift that religious traditions can offer a world that might otherwise find itself on a default path toward the eclipse of the human.

With regard to technologies that promise to automate human activities, then, the question we will need to keep asking ourselves again and again is this: what is it good for humans to do, regardless of whether a machine can do it well enough or even better? This, it seems to me, is ultimately a question of religious consequence.

More about: AI, Artificial intelligence, History & Ideas, Religion & Holidays, Technology