What Artificial Intelligence Has In Store for Judaism

AI has the potential to change the way Jews study Torah, observe Jewish law, work with rabbis, and teach their children. Will Jews resist those changes or welcome them?

An image created by the AI program DALL-E showing a painting, done in the style of Rembrandt, of Jewish scholars using an AI program on a computer. DALL-E/Open AI.

Essay
March 4 2024
About the author

Moshe Koppel is a member of the department of computer science at Bar-Ilan University and chairman of the Kohelet Policy Forum in Jerusalem. His book, Judaism Straight Up: Why Real Religion Endures, was published by Maggid Books.

The industrial revolution brought freedom and prosperity to millions in the 19th century, while also presenting considerable challenges to millions more, perhaps especially to religious communities in general and to Judaism in particular. Opportunities for migration and urbanization, the diminution of communal interdependence, exposure to alien ideas and the breakdown of religious authority—all threatened the very survival of those communities in novel ways that demanded novel responses. Looking back, it would hardly be an exaggeration to say that it has taken centuries for Judaism to adapt.

The coming information revolution, of which artificial intelligence (AI) is the most notable and best-known example, will no doubt offer great benefits, but will present even more serious challenges than the industrial revolution. As I will explain in a moment, these challenges include threats to human safety generally and to religious communities specifically. But, at the risk of sounding parochial in the spirit of Hugh Nissenson’s short story “The Elephant and My Jewish Problem” or the satirical headline “World Ends: Minorities and Women Hardest Hit,” I will mostly keep my focus here on how traditional Judaism might be forced to grapple with the challenges and opportunities presented by AI.

I will make two main points. First, drawing on work being done in my own AI lab in Israel, I will show how AI can provide tools that benefit Judaism by making Jewish texts and ideas more accessible. Second, I will suggest ways in which Judaism might, in return, offer models for purposeful and meaningful living, even as ubiquitous AI threatens to attenuate some of our deepest social and moral attachments.

 

I. What Is AI?

 

The first thing to understand about AI is that it is not new—it did not spring up fully formed in the mode of ChatGPT or other attention-getting recent products. In fact, it has taken decades to get to this point. Roughly speaking, AI began in the 1950s as a loosely connected amalgam of attempts to get computers to perform activities typically done by human beings—playing games like checkers and chess, making medical diagnoses, and so on.

In the early days, many efforts focused on formalizing information provided by experts in relevant fields. But by the 1980s, it became clear that a better approach, eventually termed “machine learning,” would be to bypass human experts and instead assemble a large collection of training examples. Then, a computer could be programmed to apply mathematical methods to search for and extract rules that explain the examples.

For instance, if we want a system that can use a patient’s vital statistics and symptoms to determine if the patient suffers from, say, hepatitis, we would gather a training set of past patients for whom we have a record of relevant statistics—temperature, blood pressure, pulse, etc.—and symptoms, as well as the correct diagnosis—whether the patient indeed had hepatitis. The former set of data are called the inputs; the latter, the output. Then, we would apply an algorithm to find a set of rules that are consistent with all these cases, rules that map a given set of input values to the correct output: in this case, the combinations of statistics and symptoms most likely to indicate hepatitis. And then we’d apply the rules to new patients we wish to diagnose; if the training has worked and the program is powerful enough, we’d anticipate generally getting the right answer.
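
To make the input/output framing concrete, here is a minimal sketch of the approach in Python, using scikit-learn’s logistic-regression classifier. The patient records, features, and diagnoses are invented purely for illustration.

```python
# A minimal sketch of learning diagnostic rules from training examples.
# All patient data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Inputs: one row per past patient (temperature, blood pressure, pulse).
X_train = [
    [38.5, 110, 95],
    [36.8, 125, 70],
    [39.1, 105, 100],
    [37.0, 130, 72],
]
# Outputs: the correct diagnosis for each patient (1 = hepatitis, 0 = not).
y_train = [1, 0, 1, 0]

# "Learning" = searching for rules that map the inputs to the correct outputs.
model = LogisticRegression()
model.fit(X_train, y_train)

# Applying the learned rules to a new patient we wish to diagnose.
print(model.predict([[38.9, 108, 92]]))  # e.g. [1], i.e. likely hepatitis
```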

One very simplistic approach to the machine-learning method, developed as early as the 1950s, assigns a certain amount of importance to each input and then aggregates them into a positive or negative response. Programs like these are called “linear classifiers.” Linear classifiers often gave reasonable results in simple cases, but it was well understood even early on that most problems could not be solved by such methods. Most phenomena can’t be explained merely by adding up various input values, in large part because subtle interactions among the inputs also need to be accounted for. One attempt to overcome the limitations of linear classifiers was, very roughly speaking, to stack up linear classifiers in layers, so that the outputs of multiple linear classifiers at one level would become the inputs to other ones at the next level. These stacked systems were called “artificial neural networks” because in some abstract way they resemble the neural structure of the brain.
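
In code, the contrast is compact. Below is a minimal sketch with invented weights: a linear classifier is a weighted sum pushed through a threshold, and the “stacked” version simply feeds the outputs of several such units into another one.

```python
import numpy as np

def linear_classifier(inputs, weights, bias):
    # Assign an importance (weight) to each input, aggregate them,
    # and return a positive or negative response.
    return np.dot(weights, inputs) + bias > 0

def stacked_classifier(inputs, layer1, layer2):
    # Stack linear units in layers: the outputs of several linear units
    # at one level become the inputs to the next level. (Real neural
    # nets use smooth "activation" functions, not hard thresholds.)
    hidden = (layer1 @ inputs) > 0      # first layer of linear units
    return np.dot(layer2, hidden) > 0   # second layer aggregates them

x = np.array([1.0, -2.0, 0.5])          # invented input values
print(linear_classifier(x, np.array([0.4, 0.1, -0.3]), bias=0.05))
print(stacked_classifier(x, np.random.randn(4, 3), np.random.randn(4)))
```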

Neural nets overcome the limitations of linear classifiers but face an opposite problem. They offer such a richness of possible mappings from inputs to outputs that they can easily find the wrong one—in other words, a mapping that would explain a given set of training examples but generalize very poorly to new examples. Think of it this way: any crackpot conspiracy theory can account for every loose end if no limit is placed on the fancifulness of its explanations; but such theories have no predictive power about future events. Similarly, the larger the neural network used, the greater the danger of finding bogus mappings. This is known as the problem of overfitting. Of course, the fewer the training examples available to constrain the possible mappings, the greater the danger of overfitting.
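
The same point can be shown numerically. In this minimal sketch with synthetic data, a model with too much freedom “explains” eight noisy training examples perfectly but generalizes badly to a new one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)                      # only eight training examples
y = 2 * x + rng.normal(0, 0.1, size=x.shape)  # true rule y = 2x, plus noise

simple = np.polyfit(x, y, deg=1)  # a constrained model: a straight line
fancy = np.polyfit(x, y, deg=7)   # enough freedom to fit every loose end

x_new = 1.5                       # a new example outside the training data
print(np.polyval(simple, x_new))  # close to 3.0, the true rule's answer
print(np.polyval(fancy, x_new))   # typically far off: overfitting
```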

Neural nets were only one of many approaches to AI, and far from the most successful or popular until a revolution took place sometime around 2012. Once data-gathering companies like Google had piled up immense stores of data that could be used as training examples, they were able to use huge neural networks to process these data, aided by crucial efficiency improvements in both software and hardware.

One example of the many challenges handled using such methods is the recursive generation of the next word in a text, given some prompt (itself a string of words). By recursive, I mean that after generating a particular word, the system appends the generated word to the existing string of words in order to generate the word after that one, and so on. Very large neural nets with lots of training data can use this recursive method, quite astonishingly, to generate intelligible responses to prompts. Such neural nets are known as “large language models” (LLMs), the most famous of which are the various versions of ChatGPT.
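
The recursive loop itself is simple; all the power lives in the next-word model. In the following minimal sketch, `next_word_probabilities` is a hypothetical stand-in for a trained neural net, with an invented toy lookup table in its place so the example runs.

```python
import random

def next_word_probabilities(words):
    # Stand-in for a trained neural net: a real LLM computes these
    # probabilities from vast training data; this table is invented.
    table = {
        ("the",): {"rabbi": 0.6, "text": 0.4},
        ("the", "rabbi"): {"teaches": 0.7, "writes": 0.3},
    }
    return table.get(tuple(words[-2:]),
                     table.get(tuple(words[-1:]), {"<end>": 1.0}))

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        probs = next_word_probabilities(words)
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        if word == "<end>":   # the model signals that it is finished
            break
        words.append(word)    # append, then generate the word after that
    return " ".join(words)

print(generate("the"))        # e.g. "the rabbi teaches"
```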

What is remarkable about this is that, so far at least, the larger the neural net, the better the responses; the quantity of training data has been ample enough to overcome the problem of overfitting. For instance, any user can observe that GPT-4, with over one trillion parameters—a measure of the size of a neural network—exhibits significantly better performance than GPT-3, which has only a few hundred billion.

It’s important to bear in mind that while large language models have captured the most attention thanks to the astonishing success of ChatGPT, neural-net technology is applicable to any problem for which there are huge quantities of training data available. Thus, for example, it can be and has been used for tasks like generating or recognizing images, translating languages, and playing games like chess at levels far beyond human champions. Neural networks can also be embedded in physical systems like self-driving cars, robots, and medical diagnostic tools, to enable prediction, planning, decision-making, and other complex tasks.

Anybody who has played with ChatGPT or any of the apps that generate images or video based on a text prompt provided by the user has a sense of how good tools built on neural networks have become. The most urgent question now is how quickly they will improve. Based on the current rate of improvement, it has been argued that within around twenty years, AI will perform all cognitive tasks as well as humans. And once that is achieved, millions of artificial programmers could then be enlisted to quickly achieve general intelligence far beyond that of any human being.

These projections need to be taken with many grains of salt. We don’t actually know that we can continue growing neural nets without running into the problem of overfitting, or that processing power will continue to grow exponentially, or that any of the many other assumptions underlying these prognostications will hold. Nevertheless, it’s safe to assume that in relatively short order AI will be able to duplicate and exceed the performance of the most skilled humans in a wide variety of cognitive tasks.

On one hand, this seems like a tremendous boon for humanity, potentially freeing up valuable time and resources and providing access to knowledge that not only enriches our intellectual lives but also serves as a catalyst for dramatically improved health and prosperity. But there are potential costs as well. Some of them apply to all people, while others pose problems for Jews in particular.

 

II. Will AI Kill Us All or Just Put Us Out of Work?

 

Much of the public discussion about the perils of AI has centered on speculation that a general super-AI would use its vastly superior intelligence to destroy the human race, either out of malice, or out of indifference as it pursues its own self-interest, or even out of a mistaken understanding of the best interests or desires of its human creators. It is indeed possible that this may happen, that someone might let AI operate real-life systems with no humans or programmed circuit-breakers in the loop (or that AI might figure out how to escape any limits placed on it), but for now we ought not let the sci-fi nerds dominate this conversation. There are more realistic dangers to consider.

LLMs, for example, may seem rather benign; after all, all they can do is generate text. But just as an LLM has the ability to show you how to fix your bike or prepare spaghetti Bolognese, if unchecked it can also show you how to get away with murder or prepare biological weapons in your basement. LLMs are extremely powerful, and as they grow their power will become available to many bad actors.

More prosaically, even if AI is not abused, there are other risks to consider. AI threatens to make many jobs obsolete, white-collar jobs in particular. LLMs are already proving to be invaluable for research—including high-level financial and legal research—and for computer programming. It probably won’t be long before many, if not most, lawyers, financial analysts, computer programmers, and other professionals are overtaken by AI.

How bad will this be? Economists often note that while the industrial revolution did put candlemakers and buggy-drivers out of their jobs, it also generated many more opportunities by moving the frontier of employment options outward—candlemakers learned to become electricians and buggy-drivers to drive cars. A similar shift might occur with the AI revolution. But it is also possible, even likely, that the rapid pace of AI advancement will lead to more radical job displacement than during the industrial revolution, while any new opportunities created might themselves be automated by AI faster than people can adjust.

The problem is not primarily an economic one, at least if the economic benefits of AI’s abilities are made sufficiently available to the general population. From an economic standpoint, machines that could effortlessly perform tasks with super-human speed and precision could make everybody wealthy. In fact, the cost of life’s necessities may be reduced so radically as a result that those who can’t be employed will not need to be employed.

The crucial question that follows from this possibility, then, is not economic but rather existential: how important is employment to our sense of purpose and self-worth? The example of many under-employed Americans is not encouraging, as has become well known lately from the declining wellbeing of middle-aged white men without college degrees. Life expectancy for this demographic has dropped in recent years due to so-called “deaths of despair”—drug overdoses, suicides, and alcohol-related diseases. In short, the most significant, non-fanciful challenges raised by the proliferation of AI are likely to involve the loss of a sense of identity and purpose.

 

III. What AI Can Do for the Jews, and Vice Versa

 

Jews are not exempt from any of these concerns, neither as individuals nor as members of a nation with a particular intellectual and religious heritage that includes many moral commitments, rituals, and values. Indeed, AI is liable to change the way Jews adhere to their tradition, study Torah and other texts, observe Jewish law (halakhah), and teach their children and students.

In fact, tools along these lines are already being built—including by me and my colleagues. I direct a non-profit lab in Israel called DICTA, which is devoted to building tools to aid and deepen the study of Jewish texts. Since that’s the realm of AI I am presently most intimate with, I will start with how our program works and what dilemmas it raises.

Rabbinic literature, beginning with the 2nd-century Mishnah and continuing to the present day, is a massive collection of works on Jewish law and lore. It is studied by millions of Jews across the world looking for practical guidance on moral and legal matters, for intellectual interest and satisfaction, and in fulfillment of the critically important mitzvah of Torah study. Yet the study of this literature presents formidable challenges to non-experts. The vast majority of classical Jewish texts are in Hebrew and Hebrew-Aramaic, have not been digitized, and lack the vowel markings and punctuation familiar to speakers of modern Hebrew. They are also rife with ambiguous abbreviations and unspecified references to previous literature. This jungle is part of the charm and beauty of Jewish religious literature, but it also makes it hard for many, if not most, readers to find a path through.

Thanks to the efforts of several organizations over the last few decades, a small portion of this vast corpus has been made significantly more accessible by human annotators. But this process requires manual effort so painstaking it can take years to complete a single volume: reading and annotating each word of long and typically terse and complex texts. To get a sense of the size of the challenge, consider some well-known collections: Sefaria, which includes commonly used works, consists of around 1,000 volumes; Bar-Ilan’s Responsa Project, which also includes works likely to be referenced only by experts, consists of around 10,000; and Otzar HaHochma, which includes whatever its organizers can get their hands on, has over 100,000 items.

Recent advances in AI, however, will soon make it possible to process entire libraries of such material automatically and swiftly. Jewish-oriented AI programs will scan old books, digitize them, correct scribal or printing or optical-character recognition errors, add vocalization and punctuation, open abbreviations, and identify citations and paraphrases of previous literature. All this can be done in minutes per volume. Soon, modernized versions of the full corpus of Jewish literature, a few copyrighted works excepted, could be made accessible to anybody with the requisite linguistic skills and knowledge.

Indeed, it should not be difficult to see how many of the tasks involved in annotating rabbinic texts—error correction, vocalization, punctuation, and opening abbreviations—are related to the task of constructing LLMs. If in some context we can accurately predict the next word (or, more broadly, assign probabilities to different possible next words), as we do in LLMs, then we can use this ability to identify possible errors in a text (a word we were not expecting), add vocalization (think of different possible vocalizations as different words and choose the expected one), add punctuation (think of punctuation marks as kinds of words), open abbreviations (which expansion consists of expected words), and so on.
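
Here is a minimal sketch of that idea applied to one such task, opening abbreviations. The function `sequence_probability` is a hypothetical stand-in for a language model trained on rabbinic text, and the candidate expansions are invented (and given in English for legibility).

```python
def sequence_probability(words):
    # Stand-in for a language model trained on rabbinic literature:
    # a real model would score how expected this word sequence is.
    expected = ("as", "our", "sages", "of", "blessed", "memory", "said")
    return 0.9 if tuple(words) == expected else 0.1

def open_abbreviation(preceding_words, candidate_expansions):
    # Choose the expansion consisting of the words the model most expects.
    return max(candidate_expansions,
               key=lambda exp: sequence_probability(preceding_words + exp))

# The same trick handles vocalization (treat each possible vocalization
# as a different word) and punctuation (treat marks as kinds of words).
candidates = [
    ["as", "our", "sages", "of", "blessed", "memory", "said"],
    ["as", "our", "teachers", "said"],
]
print(open_abbreviation([], candidates))
```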

Again, these are processes that can be, and have been, done by humans working manually; it just takes them a very long time. AI not only stands to speed them up—it also stands to enable acts of scholarship that were previously inconceivable.

For instance, some AI tools now allow for searches by concept and not just by keyword. That is, if I search for a particular word or phrase, I’d receive results that include synonyms of my search term or that involve closely related topics.
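
A common way to implement such concept search, and my assumption about how a tool like this might work rather than a description of any particular product, is to compare embedding vectors instead of keywords. A minimal sketch, where `embed` stands in for a real text-embedding model:

```python
import numpy as np

def embed(text):
    # Stand-in for a real embedding model, which maps a text to a vector
    # so that related concepts land near each other. This toy version
    # merely hashes words into a vector so the example runs end to end;
    # unlike a real model, it cannot actually recognize synonyms.
    vec = np.zeros(16)
    for word in text.lower().split():
        vec[hash(word) % 16] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def concept_search(query, passages):
    # Rank passages by the similarity of their vectors to the query's,
    # so results can match by topic rather than by exact keyword.
    q = embed(query)
    return sorted(passages, key=lambda p: -float(np.dot(embed(p), q)))

passages = ["laws of charity", "tithes for the poor", "lighting candles"]
print(concept_search("obligations of charity", passages))
```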

What’s more, AI can register connections between separate documents thousands of years old, and it is now almost possible for a program to link every citation of every text in our corpus. This is a stunning possibility, in effect creating a map of all Jewish scholarship, no matter how old. Through this power, we can determine which texts are most authoritative on a given topic—because they are frequently cited by later authorities in discussions of that topic—and which aggregate the most previous literature on the topic. These texts are called authorities and hubs, respectively. Because they represent a kind of multi-generational consensus that can be used as a guide for contemporary practice, authorities and hubs are much more valuable sources than the random texts, typically spit out by search engines, that happen to mention the topic at hand.
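
These mutually reinforcing definitions (good authorities are cited by good hubs; good hubs cite good authorities) are the heart of Kleinberg’s classic HITS link-analysis algorithm. Whether DICTA computes its scores this way is my assumption, and the citation graph below is invented, but the sketch shows how little machinery the idea requires.

```python
import numpy as np

# Rows cite columns: citations[i][j] = 1 means text i cites text j.
# The texts and the citation pattern are invented for illustration.
texts = ["Mishnah", "Gemara", "Rambam", "Shulchan Aruch", "Later responsum"]
citations = np.array([
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
])

hubs = np.ones(len(texts))
for _ in range(50):  # iterate the mutually reinforcing definitions
    authorities = citations.T @ hubs   # scored by being cited by good hubs
    authorities /= np.linalg.norm(authorities)
    hubs = citations @ authorities     # scored by citing good authorities
    hubs /= np.linalg.norm(hubs)

print("top authority:", texts[int(np.argmax(authorities))])
print("top hub:", texts[int(np.argmax(hubs))])
```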

 

These new abilities lead naturally to a question I am now frequently asked: given the ability of LLMs to respond to questions, can we dispense with searching altogether and simply ask an LLM direct questions about rabbinic literature? In other words, can we get reliable rulings on halakhic questions not from rabbis but from computers?

Of course, even asking such questions opens many cans of worms. To begin to answer them, we must first understand a bit about the different kinds of questions of Jewish law that are brought to rabbis, and the varying degrees of authority needed to address them. These fall into four broad categories.

The simplest of these matters are “lookup questions” that can be easily answered by anyone with the ability to check standard reference works. For example, what is the blessing to be recited before eating blueberries: the blessing for fruits of the tree or fruits of the ground? Not a hard question to answer as long as one has access to the right sources.

A level of complexity higher are questions, typically addressed to a local rabbi, that might involve specific cases of well-understood general rules: some accident, say, involving the mixing of milk and meat, and how to remedy it.

Further up the chain are questions in which delicate matters of personal circumstance might be relevant, like when to invoke exemptions from certain restrictions on marital relations after menstruation.

And finally, there are novel issues likely to become broadly significant, such as whether new kinds of digital technology can be used on Shabbat.

The best current LLMs already have the algorithmic ability to handle many lookup questions quite easily, and in due time will handle them all. At the moment, though, they have not been fed enough Jewish or rabbinic text in their training data to reliably address all such questions. They also “hallucinate” from time to time, fabricating spurious texts and claims—not something one wants to hear when asking about matters of religious, legal, or metaphysical importance. Still, as the training pool grows and the technology improves, these problems too will be overcome.

Likewise, although they are often poorly handled by current technology, standard questions routinely answered by any competent rabbi will also be reliably answerable by LLMs in the next few years.

Will traditional Jews allow either of these practices to happen? No doubt bringing AI into even the lower rungs of the halakhic process will be resisted at first, especially by rabbis, for educational and cultural reasons I’ll discuss below. But once they achieve a sufficient level of reliability, LLMs will become as widely accepted as were books after the invention of printing and search engines in the age of the Internet.

As for the higher levels of halakhic matters, such as delicate personal matters that require sensitivity to a questioner’s individual circumstances, observant Jews will surely and justifiably be loath to rely on AI to resolve them. But that doesn’t mean AI will play no role at all in answering them. Instead, it’s likely to be assigned a helping rather than a deciding role. There is no reason AI couldn’t eventually sketch out a range of possible rulings a rabbi could make about a given dilemma, as well as the specific parameters upon which the decision might depend, while still leaving the final decision to a human authority familiar with the situation of the questioners.

The same will probably go even for the highest level of the Jewish legal process: decisions on entirely novel matters that require the insights of leading authorities. In my view, it is likely that one day LLMs will be used by such authorities as important references, just as they might currently use search engines, even as the ultimate resolution of these matters will rest on expert authority and popular consensus. (I’ll consider an example shortly.)

Again, the current state of the technology is far from the level of reliability that would be required for even minimal rabbinic trust in AI. But it is already clear how that can be improved. The most promising method involves initially searching a comprehensive corpus for relevant texts, and then using those findings to inform the prompts given to the LLM. This is a technique known as Retrieval Augmented Generation, and it can be automated, as demonstrated by a prototype currently in development at my lab, DICTA. By focusing the LLM on relevant sources, we can prevent the kind of baseless speculation and even hallucination that LLMs are currently prone to engage in.

Still, one thing we have learned from bitter experience is that this process is not easily optimized. It is not a simple matter to use a search engine to find sources that will augment a user’s question in a manner that will elicit a useful response from an LLM. The particular prompt that a user provides—for instance, a query regarding the permissibility of preparing instant oatmeal on Shabbat—is much more likely to include unhelpful terms specific to the case at hand like “oatmeal” than it is to include the search terms that will yield the most relevant sources—for example, “gibul,” the technical halakhic term for mixing or kneading prohibited on Shabbat. Thus, in general, if we were to naively base our preliminary search for sources directly on the user’s query, we’d be unlikely to come up with helpful sources. Rather, current LLMs need to be used in a step-by-step process that first extracts relevant halakhic principles for the specific matter at hand, then translates these into optimal search terms, then searches for these terms in those books that are determined to be both authoritative and most likely to deal with closely related questions, and so on. This isn’t easy, but it’s doable.
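
Here is a minimal sketch of that step-by-step process. Every function here is a hypothetical reconstruction of the pipeline just described, not DICTA’s actual code, and the toy `llm` stub stands in for a real model; the oatmeal and gibul examples follow the text.

```python
def llm(prompt):
    # Toy stand-in for a real large language model, for illustration only.
    if "What halakhic principles" in prompt:
        return "the laws of gibul (kneading) on Shabbat"
    if "technical search terms" in prompt:
        return ["gibul"]
    return "An answer grounded in the retrieved sources."

def extract_principles(query):
    # Step 1: move from case-specific terms ("oatmeal") to the governing
    # halakhic principles ("gibul"), which the raw query rarely mentions.
    return llm("What halakhic principles govern this case? " + query)

def to_search_terms(principles):
    # Step 2: translate the principles into terms found in the sources.
    return llm("Give technical search terms for: " + principles)

def retrieve(terms, corpus):
    # Step 3: search books judged authoritative and likely to deal with
    # closely related questions (cf. the authority scores sketched above).
    return [doc for doc in corpus if any(term in doc for term in terms)]

def answer(query, corpus):
    # Step 4: augment the user's question with the retrieved sources,
    # keeping the LLM anchored to real texts rather than hallucinations.
    sources = retrieve(to_search_terms(extract_principles(query)), corpus)
    return llm("Sources: " + "; ".join(sources) + "\nQuestion: " + query)

corpus = ["laws of gibul on Shabbat ...", "laws of candle-lighting ..."]
print(answer("May I prepare instant oatmeal on Shabbat?", corpus))
```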

So let’s assume, quite plausibly, that all these wrinkles will be ironed out within a few years. Will the resulting picture of technologized society be good for the Jews? This question, in turn, leads to some deeper questions about AI and the human condition, and what Judaism has to say about them.

 

IV. The Danger of Social Detachment

 

In some ways, observant Jews might be well situated to deal with the world redrawn by AI. As we saw, a society in which most people do not need to work is a society doomed, at least in part, to boredom and meaninglessness, to mindless entertainment or drug abuse.

Even now, many people, especially young people, are addicted to their devices, often at the expense of direct human interaction. A particularly extreme example of this phenomenon, one that is likely to become more commonplace in the coming years, is that of AI companions—virtual friends or even romantic partners. These are currently LLMs accompanied by custom-designed avatars, but once synthetic video features are added, they could become entrancing to the point of total immersion, especially when combined with new virtual reality gadgets like Apple’s Vision Pro headset. Like the problem of under-employment, this possibility suggests the specter of increasing isolation and ennui.

Luckily for Jews at least, Jewish practices already include several bulwarks against such specters. It’s even possible that one day the secular world will look to them for inspiration. In part this is because Judaism in some crucial ways protects Jews from overvaluing work—which means that the loss of work may be less devastating to them. Here I’ll highlight two such ways: batei midrash (Torah study halls) as venues for meaningful intellectual endeavor, and Shabbat as a day of respite and reflection.

To my mind, batei midrash show real potential as a blueprint for the productive use of surplus leisure time. One version of this model of learning involves the study of classical texts with a partner in a study hall as an almost full-time vocation. Indeed, dozens of communities in Israel, the United States, and Europe are centered around such study halls. They have been much criticized, often with some merit, by those who regard them as encouraging shirking and sponging; but it might very well turn out that they were simply ahead of their time and will ultimately serve as examples of the constructive and fulfilling use of our time if we are freed from the need to earn a living.

Comparing the beit midrash model to the university system is instructive. Unlike the latter, where lectures predominate, batei midrash emphasize sustained engagement with texts through chavruta, or partnered study. Sessions often commence early and extend into the night, with the collective environment of a single study hall fostering a culture of diligence and accountability within and between partner groups. There, participants forge profound and lasting friendships. Moreover, batei midrash welcome adults of all ages, creating a diverse and mature learning community. Thus, a beit midrash is not a transient phase for socializing and credential-building like a university is, but a place where adults invest time in rigorous scholarship that imparts significance and direction to their lives.

Certainly, this solution won’t work for everybody, as in the old Jewish joke that Paradise is where the righteous spend all their time in uninterrupted study and Hell is the same thing for everybody else. But the model could be extended to a broader range of meaningful academic and creative pursuits. The beit midrash represents a time-tested paradigm that, when adapted to the social and intellectual situations of different types of communities, might pave the way for many, both within and beyond the Jewish sphere.

For some of the same core reasons, Shabbat may also work as a remedy for the social illnesses caused by too much AI. Shabbat, in traditional Jewish practice, is a 25-hour period in which most forms of creative effort are prohibited. Travel, food preparation, the use of most appliances and pretty much any non-essential activity you can think of must be avoided. Instead, time is spent in communal prayer, festive meals with family and neighbors, or curled up with an old-fashioned book.

In other words, Shabbat is almost tailor-made to force us off our devices and to promote uninterrupted quality time with family and friends. The full panoply of Shabbat rules is regarded by Jews as a special gift to the Jewish people, but the main idea could well serve as a model for any society looking to break free from the incessant lure of online activity.

But there’s also another specter here: it can’t be taken for granted that, when faced with novel and sweeping challenges of the sort that AI will present, the laws of Shabbat will seamlessly adapt.

Consider one example: observant Jews don’t ride in cars on Shabbat because driving a car involves causing combustion, a forbidden act on Shabbat. Imagine now an autonomous vehicle that can be programmed before Shabbat to pick me up at a specified location and take me to a specified destination. The car’s operation would be independent of my actions so that neither I nor anybody else would perform any forbidden act. In fact, skip the preprogramming and suppose that AI has learned my own patterns of behavior sufficiently well to anticipate my intentions and desires and simply activates various machines—cars, appliances, gizmos that don’t exist yet—just when I need them, without any intentional act on my part. In short, imagine that we can get the desired consequences of what are now regarded as forbidden actions on Shabbat without performing those actions ourselves.

Here’s the dilemma. It might turn out that if we are too stringent about such matters, it will be impossible to function at all in typical circumstances, and that if we are too lax, Shabbat will lose all substance and be indistinguishable from any other day. Will we find that sweet spot where Shabbat still resonates? This is a serious consideration. Though they wouldn’t use the term “sweet spot,” the most serious halakhic authorities think deeply about such matters, even as they couch their decisions in textual and technical terms.

Let’s take as an example an actual ruling regarding the use of technology on Shabbat. In it, Rabbi Asher Weiss, one of the most respected living halakhic authorities, considers two questions involving the triggering of electronic devices on Shabbat. These devices are not quite as fanciful as autonomous vehicles, but can be seen as anticipating them.

The first question asked in Rabbi Weiss’s 2013 ruling is whether it is permitted to use devices deliberately designed to circumvent Shabbat prohibitions through complicated trigger mechanisms, like a light switch designed so that when it is touched it does not directly turn on a light, but rather sets off a more complicated process in which an impediment to a pre-existing light trigger is removed. The argument for allowing such a device is that a person hitting the switch is only indirectly causing what ordinarily renders the act forbidden. The second question regards walking in public places where sensors will inevitably be triggered, causing, say, traffic lights to change or streetlights to go on or off.

It is not difficult to see that a permissive answer regarding the first question is liable to turn Shabbat prohibitions into something of a joke, easily circumvented with trick mechanisms. It’s just as easy to see that a strict answer regarding the second question could one day make it impossible for a Sabbath observer to navigate any public places, and probably most private places as well.

Rabbi Weiss makes no secret of the fact that he is aware of this dilemma. But in his final decision he offers his thinking in technical terms, laying out all the relevant legal arguments for permitting and prohibiting, as if he does not know in advance where the analysis must lead.

As he points out, the crucial matter in the first case involves what kind of actions with certain indirect consequences are permitted on Shabbat. The Talmud mentions two examples of such actions, one prohibited and the other permitted; commentators have long labored to define the boundary line between the two. The forbidden case is winnowing wheat by throwing it in the air so that the wind separates out the chaff. This is regarded as indirect because its successful execution depends on an outside force not under the actor’s control, but it is nonetheless prohibited. The permitted case is one in which, to stop a fire, drums filled with water are positioned such that approaching heat will burst them and their water will extinguish the fire.

Rabbi Weiss notes here that among early commentators there are two main approaches to distinguishing between the two cases. Some point out that winnowing and extinguishing fire are each explicitly listed among the 39 forbidden acts on Shabbat—and each is forbidden in its usual manner. Winnowing is inherently a matter of indirect action, while extinguishing is generally performed directly and is hence permitted when performed indirectly. Others emphasize a different distinction: winnowing involves an outside force, but the result is still both inevitable and immediate, while in the case of the barrels the result is neither inevitable nor immediate.

In the end, Rabbi Weiss rules in line with the second approach, prohibiting the use on Shabbat of devices based on indirect mechanisms because the result is both inevitable and immediate. He adds that even if the mechanism were designed to incorporate some small amount of uncertainty or delay, these would be insignificant and irrelevant. The use of such mechanisms should in his view be regarded as direct, rather than indirect, action.

Turning to the second question—the unintentional triggering of sensors in public places—Rabbi Weiss notes the established principle that unintended consequences render an act forbidden when those consequences are inevitable. In such cases, the consequences are predictable and hence can’t be regarded as unintentional. Thus, it might be thought that traversing areas with such sensors might be forbidden. Nevertheless, he avers, when the problematic consequences of such traversal—the triggering of sensors—are both unintended and indirect, there is room for leniency. Thus, he rules that, though it is best to avoid such situations, one need not refrain from walking in the street where sensors are deployed.

This is but one very attenuated summary of a single discussion from among a voluminous literature. It illustrates how much leading authorities are aware of long-term policy consequences even as they analyze technical aspects of halakhic issues. At the very least, the open-ended character of such discussions strongly suggests that multiple approaches can and will be tried and the ones best adapted will survive. In short, it is likely that the sweet spot between stifling strictness and shortsighted leniency will eventually be found, allowing Shabbat to sustain its role in fostering genuine and direct human connection.

 

V. The Dangers of Moral Detachment

 

When we need to make decisions about personal or ethical matters—whom to marry, how much to give to charity and to whom, whether to offer our seat to someone on the train, and so on—we usually invoke both an intuitive feeling about what’s right and more analytical utilitarian reasoning that weighs the potential outcomes and benefits. Those intuitive feelings are rooted in some evolved hard-wiring common to all human beings, as well as in the particular religious and social norms that we have acquired from our families and attachments. The cold utilitarian reasoning is in some sense formulaic, so that machines are potentially better at it than people.

Precisely for this reason, there is a risk that as AI becomes more integrated into our daily lives, we might start to rely excessively on its apparently impersonal logic to free us of the need to make our own moral and personal choices.

The pitfalls of purely utilitarian reasoning are well-established. A classic example is the case of sacrificing one healthy person so that his organs can be harvested to save the lives of five patients. Perhaps less fancifully, it is instructive to contemplate what happens when the wonky mindset that tries to solve every problem by modeling and quantifying is applied to determining how charity could be distributed in the most efficient way. So, for example, if we naively assume that we want to maximize aggregate utility in the world, we are sucked, absurdly, into what philosophers call the “repugnant conclusion”—that we should strive to have a vast number of people living barely tolerable lives rather than a smaller number of people living good lives. Of course, we can backpedal at this point and complicate our definition of aggregate utility, but the experience of the Effective Altruism movement suggests that the backpedaling never ends and the whole process depends much more on the hard problem of figuring out what you want to maximize than on the technical problem of how to maximize it. In other words, we need to be very careful about understanding our values and goals before diving into optimization strategies that, while mathematically sound, may lead to morally questionable or undesirable outcomes. It is not clear that AI is suited to this task.
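
The arithmetic behind the repugnant conclusion fits in a few lines; the population sizes and wellbeing scores below are invented for illustration.

```python
# If the goal is aggregate utility (population x average wellbeing), a vast,
# barely tolerable population "beats" a smaller, flourishing one.
flourishing = 1_000_000_000 * 9.0         # 1 billion people, wellbeing 9
barely_tolerable = 100_000_000_000 * 0.1  # 100 billion people, wellbeing 0.1
print(barely_tolerable > flourishing)     # True: the "repugnant conclusion"
```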

Compounding this concern is the potential for AI to propagate one-size-fits-all norms that don’t account for the unique character and context of different societies. At any given moment, a handful of state-of-the-art LLMs may dominate the field, embedding their particular biases into what could become a de facto standard of “wisdom.”

Furthermore, a trained LLM can be rather easily manipulated in a secondary training phase, called the “alignment” phase, to take on any particular ideological valence—as recently became hilariously and frighteningly obvious in Google’s catastrophic rollout of its uber-woke LLM, Gemini. Thus, it may happen that a plethora of available alignment biases could balkanize society in ways that make the polarization caused by social networks look like child’s play. These two dangers, while apparently opposites, are not mutually exclusive: a highly fragmented society could still suffer from certain widespread biases.

Judaism offers an alternative to the challenges of untested, detached utilitarianism. It is a slowly evolving moral system that has adapted over centuries and millennia by combining utilitarian considerations with hard rules, and by encouraging lively debates that pit differing human intuitions against each other. Through this tradition of documenting and preserving intuitions over time, Judaism avoids arid moral determinism and encourages the ongoing cultivation of moral intuition relevant to specific circumstances. Thus, it steers clear of the hazard of discarding time-honored wisdom for the allure of modern, yet unproven, ethical frameworks—wisdom that may appear archaic in comparison with new ideas, but that, as we might discover too late, has survived for good reasons.

This process is familiar to anyone who has studied Talmud and the development of halakhah in the literature it spawned. But let’s consider here a single example that illustrates the way a halakhic discussion weaves together utilitarian and moral reasoning, extends over two millennia, invokes moral intuition rooted deeply in the relevant literature—and is especially apt since it will take on particular relevance as a result of an application of AI.

Imagine you find yourself losing control of your car and are suddenly faced with the decision to crash into a rather unforgiving wall, possibly killing you, or into another car, which would absorb some of the impact energy and probably save you but might kill other people. You probably haven’t contemplated this dilemma—and even if you have, your conclusions might not prove useful during the actual few critical seconds. But when programming autonomous vehicles, engineers need to work such decisions into the code. How can such difficult ethical questions be handled by engineers or, more likely, regulators?

The most relevant philosophical literature is that regarding what are known as “trolley problems.” The best-known of these problems, introduced by the British philosopher Philippa Foot in a 1967 paper and now part of philosophic folklore, is one in which a trolley driver can either stay the course and run over five people or divert to another track and run over only one person. Foot also examines a hostage situation in which one innocent person can be traded for five hostages. The original paper has since spawned a rich literature, once considered somewhat theoretical but now quite clearly relevant to the programming of autonomous vehicles.

As it happens, there’s serious rabbinic literature on similar dilemmas. Writing in Israel in the late 1940s and early 50s, Rabbi Avraham Yeshaya Karelitz (widely known as Hazon Ish) considers the case of a projectile on a course to kill multiple people that can be diverted so that it would kill only one person. He then compares this case to one considered in the 4th-century Jerusalem Talmud involving trading an innocent person for multiple hostages. The question pits a utilitarian consideration—fewer lives lost—against a moral rule—not to sacrifice deliberately a particular individual. Hazon Ish anticipates Foot’s arguments, proposing that the crucial consideration is whether the act in question is a good act with a bad effect (diverting the projectile) or a bad act with a good effect (handing over an innocent person).

I mention this to highlight the fact that this argument is deeply rooted in an ongoing conversation of thousands of years that has been recorded and retained—and each argument of which is situated appropriately in the broader conversation and treated with utmost seriousness and respect by a community of scholars and even by devoted laymen. I suggest that such a community would be better equipped than most to resist the temptation to reduce morality to simple utilitarian calculations that lack social or historical context.

Ultimately, this leads us back to my earlier question: will the prospect of AI solving halakhic questions impede this Jewish tendency to resist mechanistic reason?

Let me spell out the danger. Judaism, as a mimetic tradition, thrives on experiences, customs, and practices passed down through generations within families and communities. Replacing human mentors with AI, or even merely diminishing the roles of rabbis—a development that rabbis will certainly resist, but that could take place anyway as AI becomes ubiquitous—could erase the subtle nuances and variations of oral transmission and could lead to a homogenized tradition devoid of Judaism’s current emotional and spiritual depth. What’s more, the organic evolution of customs and practices, which is inherent in the mimetic tradition, might be constrained.

In fact, reliance on computer oracles for resolving halakhic dilemmas might diminish our ability to engage with moral issues and dull our intuitions, much in the way that Waze gets us places faster but, over time, diminishes our ability to navigate on our own. If the study of Torah and the internalization of its values have inherent worth and are not merely instrumental in deciding on a course of action, trading internalized Torah for oracular answers is a bad deal.

In short, AI will make rabbinic literature more accessible to many people and this is something we should be grateful for. But we must take care to use AI as a helpful tool that draws us into greater engagement with traditional thought rather than as an oracle that seduces us into intellectual complacency.

 

VI. Big Questions

 

The challenges we have been considering thus far are big. Still, one might be tempted to regard them as somewhat prosaic compared to certain more grandiose questions I’ve so far mostly skipped over. But they are interesting and important, and I enjoy speculating about them, so I’ll quickly discuss them now. The questions of who is human, alive, conscious, in possession of free will, and so on are all of the sort that we intuitively feel we can answer well enough, even without perfect definitional clarity. One day, AI is likely to present us with androids that have some, though not all, of the qualities that we associate with each of these states of being and that therefore challenge our intuitions on such matters. In fact, AI is also likely to present us at some stage with trans-human intelligences that will seem to us almost omniscient, thus raising some interesting theological questions.

Thinking about such questions in the abstract is likely to get us mired in sterile semantic debates. The better way to deal with such questions is within the context of operational inquiries regarding moral duty and responsibility. For example, what are our responsibilities to such intermediate beings and for what can they be held responsible? It is too early to answer such questions because we don’t yet understand the emotional responses such beings might evoke in us, or how naturally we’d distinguish between them and actual human beings. Of course, whatever concrete conclusions we eventually reach on such matters will be rooted in implicit assumptions about the nature of humanity. But better that the theology emerges from practical decisions anchored in well-developed moral systems than the other way around. This is how Judaism has operated since time immemorial, and it is the methodology through which Judaism is most likely to make a positive contribution to global discussions of such matters.

Along these lines, let me end on an upbeat note that is more about the state of the Jews than about the state of AI. The potential benefits of AI stretch the imagination. At the same time, the challenges presented by AI both for the world and for the Jews are, as I’ve outlined, numerous and daunting. Judaism stands to gain a great deal from tools that could render the study of its deepest questions both more intensive and more accessible. But the prospect that seems to me even more thrilling is that Judaism, especially as it matures into a confident mainstream culture in Israel, will overcome its customary defensiveness and share with a confused world the fruits of thousands of years of engagement with social and moral issues soon to be made more pressing by AI.

As we develop better tools that allow us to perform more tasks in less time, perhaps the wisdom of Judaism can be invoked to address a question that will continue to haunt us: where exactly are we rushing to?
