Welcome to the Handbook of Jewish AI Ethics. These research guides provide in-depth explorations of key topics at the intersection of artificial intelligence and Jewish thought. Each entry serves as both a mini encyclopedia article and an annotated bibliography, and each is associated with a Sefaria Source Sheet of primary sources relevant to the question. Together these should offer context, analysis, and resources for further study. As always, be sure to contact us with any feedback and/or additional resources you'd like to see here.
Click on any topic below to read its full entry, or use the search bar (top right) to find specific terms across all entries.
Overview
Sheliḥut (agency) is the halakhic framework by which one person may act on behalf of another. The principle, phrased by the Talmud as "a person's agent is like himself" (שלוחו של אדם כמותו, sheluḥo shel adam ke-moto; Berakhot 34b; Kiddushin 41b), allows legally binding actions performed by an agent (shaliaḥ) to be attributed to the principal (meshaleaḥ). This framework governs commercial transactions, marriage contracts, and even some, but not all, ritual obligations.
The matter of sheliḥut raises the question of whether autonomous or semi-autonomous nonhuman systems can serve as halakhic agents, and whether their actions can be meaningfully attributed to human principals. When it comes to the ritual fulfillment of commandments, at least, application to AI is constrained by a fundamental requirement: only one who is bar ḥiyuva (subject to legal obligation) can serve as an agent. Presumably, since an AI is not commanded to observe mitzvot, under classic halakhic reasoning "objects can never be proxies" (Kalman 2024).
However, it is possible that this limitation applies only to ritual commandments; in fact, the Talmud does suggest some equivalence, or at least a comparison, between a person's agent and their courtyard, which can legally acquire property for its owner within its domain (Bava Metzia 10b-11b, Sefer ha-Mikneh 15:4-8). Medieval and modern commentators have discussed to what extent, if any, the rabbis extend the legal principle of sheliḥut to this specific inanimate object, viz. a person's real property (Tosafot Rosh and Rashba to Kiddushin 42a). While minors and the mentally incompetent are precluded from serving as valid agents due to their insufficient da'at (knowledge or intent), the terminology of sheliḥut used in the context of property indicates some flexibility for applications which are sufficiently dissimilar from human examples (cf. Nimukei Yosef and Pnei Yehoshua to Bava Metzia 11a).
Whether or not any vestige of the halakhic concept of sheliḥut applies to artificially constructed agents, the extensive literature on this topic in traditional sources may prove useful for developing theories of AI liability and similar issues. For example, the Talmud (Gittin 29a-b) discusses the case of a principal (specifically, a husband seeking to divorce his wife) who dies before the agent has discharged his duty; the ensuing discourse shows how halakhic authorities have considered to what extent the principal must remain continuously 'interested' (even subconsciously) in the agency of his proxy, and how completely authority may be transferred from one person to another (Klein 2024).
Primary Sources
-
Babylonian Talmud: Berakhot 34b; Gittin 29a-b; Kiddushin 41b; Bava Metzia 10a-11b.
-
Ketzot HaḤoshen, simanim 182 and 188; Sefer ha-Mikneh 15:4-8. Key discussions of the nature of agency and the authority transferred to agents; analyze whether the agent acts as an extension of the principal or with independently delegated power.
Secondary Sources
-
Klein, Dov. "Mingnon HaSheliḥut." [Hebrew] Yarhon Ha-Otzar 105 (2024): 449-464. Extensive treatment of the conceptual underpinnings of shelihut through rabbinic literature; briefly but directly addresses whether AI can fit within these frameworks.
-
Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence (2024): 69-87. Argues that sheliḥut, animal-liability, and grama frameworks appear applicable to AI but remain underdetermined; cautions against premature systematization.
Overview
AI systems now make or inform consequential decisions in hiring, lending, criminal sentencing, healthcare, housing, and education. When these systems encode historical prejudices, having been "trained on" or having "learned from" data that reflects centuries of discrimination, they risk entrenching these injustices and perpetuating them behind a veneer of objectivity. For example, a hiring algorithm trained on past decisions learns to replicate the biases against hiring certain otherwise qualified individuals merely because they belong to a certain social class or ethnic group.
The core problem is that these systems treat individuals primarily as members of statistical reference classes, which is quite literally a matter of prejudice – it pre-judges the worth or fit of an individual instead of assessing their true merits or conduct. A person seeking a loan, a job, or parole is evaluated not as an individual but as an instance of a pattern derived from historical data. If that history includes redlining, discriminatory hiring, or biased policing, the algorithm learns to perpetuate those patterns. The system appears neutral—it is "just math"—but it automates the prejudices of the past (O'Neil 2016; Christian 2020).
Jewish tradition offers substantial resources for thinking through these problems. The Torah repeatedly commands that judges be impartial, and warns against favoring or disfavoring litigants based on wealth, status, or social position. Before turning to these procedural safeguards, it is worth addressing a more fundamental question: what does Jewish tradition say about the equal worth of all human beings?
Like many questions of traditional Jewish thought, the answer is not so straightforward. Judaism has long grappled with how to understand the distinct character of its own nationhood while recognizing the individual worth of every person; the particularist aspect of Jewish thought—its emphasis on the special covenant between God and Israel to the exclusion of other nations—can be said to be among its most defining features. Nevertheless, even the most ancient and classical Jewish sources contain emphatic affirmations of universal human dignity that cut against any ideology of racial or ethnic hierarchy. Genesis 1:27 teaches that all human beings are descended from Adam, who was created in God's image. In a remarkably anti-racist passage, the Mishnah (Sanhedrin 4:5) draws out the implications of this teaching: Adam was created alone "for the sake of peace among people, so that no one could say to another, 'My ancestor was greater than yours.'" This teaching directly challenges the logic of prejudice, the assumption that a person's lineage or ethnicity determines their worth.
The prophetic tradition reinforces this vision of human unity. Malachi 2:10 asks: "Have we not all one Father? Did not one God create us? Why do we deal treacherously with one another?" Amos 9:7 goes further, claiming that God's providential care extends to other nations as well: "Are you not like the Ethiopians to Me, O children of Israel? says the Lord. Did I not bring Israel up from the land of Egypt, and the Philistines from Caphtor, and Aram from Kir?" And Isaiah (56:3-7) promises that "the child of the foreigner should not say, God has separated me from His nation… [rather] my house shall be called a house of prayer for all nations." To be sure, other strands within Jewish scripture and classical literature may reflect different views, but the weight of the legal and ethical tradition, and certainly how it is understood in contemporary Judaism, emphasizes the dignity owed to every human being and the avoidance of prejudice, bias, and discrimination (Novak 1983, Hughes 2014).
Halakha, or Jewish law, includes specific procedural safeguards to protect against bias in judgment. Leviticus 19:15 commands: "You shall not render an unfair decision; do not favor the poor or show deference to the rich; judge your kinsman fairly." This principle was applied concretely in the Talmud (Shevuot 31a), which rules that if a poor person sues a rich person, the wealthy litigant cannot appear in court dressed more impressively than the poor one. Either the wealthy party must enable the poor one to dress comparably, or must dress down, "lest the poor litigant be weakened by his manifest inferiority in dress, or the judge be affected by the imbalance." The concern here is not merely with explicit bias but with the subtle ways that visible markers of status can distort judgment, even unconsciously.
In that same section of Talmud we find a citation of the rabbinic principle of dan lekaf zekhut (judging favorably; Pirkei Avot 1:6), which Rashi interprets as assuming good intentions until proven otherwise. This principle presupposes that each person stands before judgment as an individual, capable of surprising us and deserving the benefit of the doubt. Predictive machine-learning algorithms can be seen as operating on precisely the opposite logic: they assign risk scores based on statistical correlations, presuming a person's disposition toward a particular outcome from possibly arbitrary factors, in a manner similar to the unconscious bias that concerned the talmudic sages. Such guidelines may therefore be applied to certain algorithmic programs by requiring that they explicitly avoid characteristics that may be predictive but reflect unacceptable biases, such as a person's skin tone or style of clothing.
However, there may be an even deeper problem with relying upon machine learning for making consequential decisions about a person's fate. A fundamental issue is that these systems, by their very nature, treat subjects as members of reference classes with specific mathematical weights and judge them accordingly, instead of as individuals with infinite worth. The Talmudic solution of controlling what information reaches the judge thus cannot address a system whose entire method is to classify people by group characteristics and predict their behavior based on the historical patterns of that group.
On the other hand, one may argue that a truly correct algorithmic prediction, if it were to exist, will be perfectly impartial. All of the factors that it considers to be predictive would indeed be so, as it would not be hindered by human biases. Even in this case, however, it is worth recognizing that "strict justice" is not always what is called for, and the Torah expects people to follow not only the letter of the law, but to also address past injustices or current inequalities in ways that are appropriate, as can be demonstrated from several talmudic and rabbinic texts (cf. Bava Metzia 83a).
The experience of antisemitism in AI systems (see Anti-Semitism and AI) gives Jewish communities a particular stake in these questions. Testing of major language models has found that they produce harmful outputs targeting Jews at elevated rates, likely because training data drawn from the internet reflects centuries of embedded antisemitic tropes. This experience should inform Jewish engagement with bias and discrimination more broadly: the same dynamics that encode antisemitism also encode racism, sexism, and other forms of prejudice. A Jewish response cannot focus only on harms to Jews while ignoring harms to others who are similarly situated.
Primary Sources
-
Genesis 1:27. The creation of humanity b'tzelem Elohim (in the divine image), establishing the foundational equality and dignity of all persons regardless of social category.
-
Exodus 23:3. "Do not favor the poor in his cause."
-
Exodus 23:6. "Do not pervert the judgment of the poor in his cause."
-
Leviticus 19:15. "You shall not render an unfair decision; do not favor the poor or show deference to the rich; judge your kinsman fairly."
-
Deuteronomy 10:18-19. The obligation to love and provide for the stranger (ger), grounded in Israel's experience of vulnerability in Egypt.
-
Isaiah 11:4. The Messiah "will judge the poor justly and decide with equity for the meek of the earth." Messianic justice attends to the vulnerable.
-
Isaiah 56:3-7. Reference to members of other nations.
-
Amos 9:7. Indication that God's providence extends to non-Israelites.
-
Malachi 2:10. "Have we not all one Father? Did not one God create us?"
-
Mishnah Sanhedrin 4:5. Adam was created alone "for the sake of peace among people, so that no one could say to another, 'My father was greater than yours.'" Also teaches that whoever destroys a single life is as if they destroyed an entire world.
-
Pirkei Avot 1:6. Joshua ben Perahiah's teaching to "judge every person favorably."
-
Pirkei Avot 4:3. "Do not despise any person, and do not discriminate against anything, for there is no person who does not have their hour, and no thing that does not have its place."
-
Babylonian Talmud, Shabbat 31a. Hillel's formulation of the Golden Rule: "That which is hateful to you, do not do to another."
-
Babylonian Talmud, Shabbat 54b. Responsibility to protest wrongdoing at the level of household, town, and world.
-
Babylonian Talmud, Berakhot 10a. Berurya's teaching to pray for the end of sin rather than the destruction of sinners; may suggest focus on reforming systems rather than punishing individuals.
-
Babylonian Talmud, Bava Metzia 83a. The case of Rabbah bar Rav Huna and the porters; Rav rules that justice requires attending to the workers' poverty, not merely applying the formal law.
-
Babylonian Talmud, Sanhedrin 37a. Extended discussion of human dignity and the infinite value of each individual life.
-
Babylonian Talmud, Shevuot 31a. Procedural requirements ensuring that litigants appear equal before the court; wealthy parties may not dress more impressively than poor ones.
-
Maimonides, Mishneh Torah, Hilkhot Matanot Aniyim 7:3. The obligation to provide for the poor according to their individual needs, preserving dignity through attention to particularity.
Secondary Sources
Jewish Ethics, Universalism, and Human Dignity
-
Novak, David. The Image of the Non-Jew in Judaism: An Historical and Constructive Study of the Noahide Laws. New York: Edwin Mellen Press, 1983. The classic treatment of how halakha addresses the moral and legal status of gentiles; argues that the Noahide laws establish a framework of universal human dignity grounded in creation.
-
Hughes, Aaron W. Rethinking Jewish Philosophy: Beyond Particularism and Universalism. Oxford: Oxford University Press, 2014. Critically examines the tension between particularist and universalist strands in Jewish thought; useful for contextualizing claims about Judaism's stance on human equality without oversimplifying the tradition.
-
Soloveichik, Aharon. "Civil Rights and the Dignity of Man." In Logic of the Heart, Logic of the Mind: Wisdom and Reflections on Topics of Our Times, 61-70. Jerusalem: Genesis Jerusalem Press, 1991. A forceful Orthodox rabbinic argument for racial equality grounded in tzelem Elohim; written in the context of the American civil rights movement, it demonstrates how classical Jewish sources can be marshaled against discrimination.
-
Greenberg, Yitzhak. "Justice, Justice Shall You Pursue." American Jewish University, 2021. Analysis of Parashat Shoftim arguing that Jewish tradition supports affirmative action to remedy systemic injustice while warning against abandoning individual judgment; directly applicable to debates about how to balance categorical remedies with case-by-case assessment.
-
Dorff, Elliot N. To Do the Right and the Good: A Jewish Approach to Modern Social Ethics. Philadelphia: Jewish Publication Society, 2002. Accessible treatment of Jewish social ethics including justice, equality, and communal responsibility; includes discussion of affirmative action and systemic remedies.
-
Lichtenstein, Aharon. "The Human and Social Factor in Halakha." Tradition 36, no. 1 (2002): 1-25. Explores when human dignity (kevod habriyot) overrides other halakhic considerations; foundational for understanding how dignity claims function in Jewish law.
-
Sacks, Jonathan. The Dignity of Difference: How to Avoid the Clash of Civilizations. London: Continuum, 2002. Argues that human dignity requires recognition of difference rather than homogenization; relevant to debates about whether systems should be "colorblind" or attend to group membership.
Discrimination and Technology
-
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016. Essential account of how automated systems encode and amplify discrimination by treating individuals as instances of statistical reference classes; documents harms in hiring, lending, criminal justice, and education.
-
Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. Explains how systems trained on biased data perpetuate those biases and the technical challenges of ensuring AI acts in accordance with human values.
-
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press, 2019. Critical analysis of how technology reproduces racial hierarchy; argues that "neutral" systems often encode discriminatory assumptions.
-
Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press, 2018. Documents how automated systems in welfare, child protective services, and homeless services disproportionately harm poor communities.
-
Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023. Technical but accessible treatment of fairness in machine learning; explains the mathematical trade-offs between different fairness definitions and why they cannot all be satisfied simultaneously.
Overview
Algorithmic pricing refers to the use of automated systems, including machine learning models, to set prices dynamically based on data about consumers, market conditions, inventory levels, and competitor behavior. While such systems can improve economic efficiency and market responsiveness, they raise ethical concerns when they enable forms of price discrimination, especially when these arise through methods that would be impossible for humans to implement or to understand, such that no human is aware of the factors that determine specific price changes.
Typically, Jewish law sees no problem in charging or offering different prices to different buyers and sellers, but there is a large body of literature on fair pricing of market transactions. The Talmud takes a very harsh view of "artificial" price manipulation (Bava Batra 90b; cf. Warhaftig 1987). Legal traditions addressing ona'ah (price fraud/overpricing), hafka'at she'arim (profiteering on essential goods), and communal regulation offer frameworks for evaluating these practices. The halakhic concern with pricing fairness emerges from several related but distinct sources. The Biblical prohibition of ona'ah (Leviticus 25:14) bars transactions where the price deviates by more than one-sixth from the market price, protecting both buyers and sellers from exploitation through information asymmetry. The Rabbinic ordinance of hafka'at she'arim, which limits profit margins on essential foodstuffs (ḥayyei nefesh), reflects a concern for ensuring access to necessities. Additionally, the corporate Jewish community possessed authority to regulate prices and wages relatively democratically, with enforcement power backed by penalties and, according to some, even police monitors (Babylonian Talmud Bava Batra 89a and Jerusalem Talmud Bava Batra 5:5).
Applying these rabbinic laws to modern markets, even without consideration of technological advancement in the quantification of market parameters, is not trivial. One consideration is that rabbinic concerns may seem to be, at least on their face, opposed to free market ideals and other principles of modern economic theory (Rakover 2000, Makovi 2016). The scholarly literature reveals some debate about whether these halakhic frameworks constitute "price controls" in the economic sense and whether they remain applicable in modern market conditions (Levine 2012, Makovi 2016).
When it comes to price discrimination, it should be noted that the Talmud itself endorses certain forms of differential pricing based on status. Torah scholars (talmidei ḥakhamim) received special market privileges, including priority to sell their goods first and exemption from restrictions that applied to other traveling merchants (Bava Batra 22a). Jewish tradition does not consider differential treatment inherently problematic; the legitimacy of preferential pricing depends on whether the basis for differentiation serves a recognized social good (such as supporting Torah study). It is possible that other considerations, such as economic efficiency and supply chain robustness, may also constitute sufficient cause for price discrimination. It is also possible that some level of transparency in algorithmic (or AI-assisted) market decisions may be necessary for them to be legitimately relied upon, just as communal regulations were typically enacted through the deliberations of human leaders. But all of these questions remain largely unexplored.
Primary Sources
-
Leviticus 25:14. The Biblical source for ona'ah.
-
Mishnah Bava Metzia 4:3-12 and Babylonian Talmud Bava Metzia 40b; Bava Batra 89a-91a. Discusses market supervision, price commissioners, profit limitations on essential goods, and restrictions on speculation and hoarding.
-
Tosefta Bava Metzia 11:12 (also 11:23). "The townspeople may stipulate prices, measures, and the wages of workers. They are permitted to impose penalties."
-
Maimonides, Mishneh Torah, Hilkhot Mekhira 12-14. Codifies the laws of ona'ah (chapters 12-13) and market regulation (chapter 14).
Secondary Sources
Conceptual Foundations
-
Kleiman, Ephraim. "'Just Price' in Talmudic Literature." History of Political Economy 19, no. 1 (1987): 23-45. Foundational study arguing that ona'ah is not a price control but a protection against asymmetric information; possibly relevant to cases of inscrutable algorithms.
-
Rakover, Nahum. "Price Regulation in Jewish Law." In Ethics in the Market Place: A Jewish Perspective. Jerusalem: Library of Jewish Law, 2000. Accessible summary of Talmudic debates about price supervision.
-
Tamari, Meir. In the Marketplace: Jewish Business Ethics. Southfield, MI: Targum/Feldheim, 1991. Accessible presentation on halakhot of market economics, arguing for the relevance even of the enforcement of such laws in modern economies.
-
Warhaftig, Itamar. "Consumer Protection: Price and Wage Levels." Crossroads: Halacha and the Modern World, Vol. 1. Alon Shvut-Gush Etzion: Zomet Institute, 1987. Detailed analysis of hafka'at she'arim and prohibitions of "artificial" market manipulation or price inflation, also emphasizing the importance of limiting profit margins.
Economic Analysis
-
Levine, Aaron. Economic Morality and Jewish Law. Oxford: Oxford University Press, 2012. Chapter 4 ("Price Controls in Jewish Law"). Recognizing that modern economic theory holds that external price regulations are self-defeating, Levine argues that halakha is not interested in price controls in the strictly economic sense.
-
Makovi, Michael. "Price-Controls in Jewish Law." MPRA Paper No. 72821, 2016. Critical analysis from a free-market economic perspective.
Overview
In contemporary discourse around Artificial Intelligence, the "alignment problem" refers to the challenge of ensuring that artificial intelligence systems reliably act in accordance with human values and intentions. Sometimes this is phrased differently, as a "control" question, which asks whether humans can maintain meaningful oversight of AI systems, particularly as they become more capable and more inscrutable. These intertwined concerns have emerged as central preoccupations of contemporary AI ethics and safety research, and have spawned active fields of both technical AI safety and AI governance from regulatory and political perspectives (Christian 2020; Bostrom 2014; Dafoe 2018).
Jewish sources offer several frameworks for thinking about these problems, though none map perfectly onto contemporary AI. The most suggestive parallel may be the rabbinic discourse on shedim (demons) and the practice of demon-summoning. Unlike in Christian demonology, rabbinic demons are not inherently evil but are intelligent agents with their own interests and capacities for deception (Ronis 2022). In rabbinic literature, we find discussions about whether and how humans may consult with demons, and notably, a recognition that such entities are fundamentally unreliable (Sanhedrin 101a, Nahmanides to Shabbat 156).
A similar precedent arises from traditions of artificial humanoids, later called golems; a common trope regarding these creatures was their resistance to being fully controlled. Rabbi Yaakov Emden (18th c.) records that his ancestor, R. Eliyahu Ba'al Shem, fashioned such a being, but later destroyed his creation because "when the master saw that the Golem was growing larger and larger, he feared that the Golem would destroy the universe;" in the ensuing battle to subdue the creature, "the Golem injured him, scarring him on the face" (She'elat Ya'avetz 2:82). These and similar texts indicate that existential anxieties around the dangers of artificial humanoids are not new, and may point to the importance of maintaining controls over such technologies and the need for an accessible "kill switch" if necessary. Of course, nearly all such discussions, even when they appear in Jewish legal texts, maintain more of a folkloristic than normative tone. It is unclear to what extent these rabbinic authors considered the control problem in more detail, or how they would determine acceptable and unacceptable levels of unpredictability or self-directed behavior in products of human artifice, but a careful reading of these sources may prove fruitful in that regard.
Primary Sources
-
BT Sanhedrin 65b-67b; 101a.
-
Nahmanides' responsum on Chaldeans and demons (printed at BT Shabbat 156).
-
She'elat Ya'avetz 2:82.
Secondary Sources
The Alignment Problem in AI
-
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. One of the earliest and most influential statements of existential risk from advanced AI, including the "control problem" of maintaining human authority over superintelligent systems.
-
Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. A definitive account of AI alignment traced through the modern history of artificial intelligence; very accessible yet highly detailed.
Golem Traditions and the Control Problem
-
Charpa, Ulrich. "Synthetic Biology and the Golem of Prague: Philosophical Reflections on a Suggestive Metaphor." Perspectives in Biology and Medicine 55, no. 4 (2012): 554-70. Examines the golem as a metaphor for emerging biotechnology; though focused on synthetic biology, the analysis of control anxieties transfers directly to AI.
-
Idel, Moshe. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany: SUNY Press, 1990. The definitive scholarly treatment of golem traditions; distinguishes between mystical-experiential and practical-magical interpretations and traces the development of control anxieties in later golem narratives.
-
Scholem, Gershom. "The Idea of the Golem." In On the Kabbalah and Its Symbolism, 158-204. New York: Schocken Books, 1965. Foundational essay on golem symbolism, including discussion of thirteenth-century German legends in which the golem warns against its own creation.
Jewish Demonology and Agent Reliability
-
Ronis, Sara. Demons in the Details: Demonic Discourse and Rabbinic Culture in Late Antique Babylonia. University of California Press, 2022. The most comprehensive recent study of Babylonian Talmudic demonology; argues that demons served as a conceptual space for exploring boundaries and anxieties, directly applicable to understanding AI as a new liminal category.
-
Bohak, Gideon. Ancient Jewish Magic: A History. Cambridge: Cambridge University Press, 2008. Survey of Jewish magical practices including demon adjuration; useful for understanding the techniques by which practitioners attempted to bind and control supernatural entities.
AI and Contemporary Jewish Ethics
-
Grossman, Yitzhak. "Jewish Perspectives on Artificial Intelligence and Synthetic Biology." Ḥakirah 35 (2024): 61-92. Surveys rabbinic sources on artificial creation, including the fear that golems might "become harmful to people" if left to grow; connects classical concerns to contemporary AI.
Overview
Jewish tradition has long recognized the existence of various non-human beings that nevertheless seem to possess many human-like characteristics, sometimes including intelligence, speech, and some personal agency. All of these are, in the traditional rabbinic conception, naturally occurring phenomena, as opposed to the golem, which is a product of human artifice. Angels, demons, and other human-like creatures occupy varied positions in Jewish cosmology: angels (mal'akhim, "messengers") typically carry out divine will without physical needs or moral struggle; demons (shedim) in rabbinic literature share surprising affinities with humans, including mortality and subjection to divine law; and humanoid monsters blur the boundary between human and animal. Together, these categories reveal that Jewish thought has long grappled with what David Zvi Kalman calls the "human gradient," the recognition that humanity is not a binary category and that intelligent or quasi-human beings need not threaten humanity's special status (Kalman 2024).
The very liminality of these creatures as not-quite-humans is made explicit by the rabbinic sages. In BT Hagigah 16a, for example, the sages note that demons possess six characteristics: three like the ministering angels (wings, flight across the world, and foreknowledge) and three like humans (eating and drinking, procreation, and death). This schema is then applied to humans themselves, who likewise have six traits: three in common with angels and three shared with animals. Such discussions recognize that humans possess a collection of capabilities, but the most "human" of these—namely, "intelligence, posture, and holy speech"—are shared by angels as well. The framework can theoretically be applied to Artificial Intelligence: it may not have "posture" (mehalkhim be-komah zekufah), but does it have speech and/or intelligence (da'at) in the way that the rabbis are using the term?
Angels present a model of intelligence that is powerful, purposive, and aligned with its principal's goals. In the dominant rabbinic conception (cf. BT Shabbat 88b-89a; Bereshit Rabbah 48:11), angels lack beḥirah (freedom of choice) and are merely humanoid tools incapable of deviating from their assigned mission (Ahuvia 2021), or to use the phraseology that is current in AI discourse: they are perfectly aligned agents by design.
However, this view was not universal in ancient Judaism. Second Temple literature preserves robust traditions of angelic rebellion, most notably in 1 Enoch 6-16 (the Book of Watchers) and in elaborations upon the biblical story of Genesis 6:1-4, which describes the b'nei elohim ("sons of God") cohabiting with human women. Although the text is ambiguous about the identity of these figures, the tradition that interprets them as fallen angels is attested even in rabbinic sources (cf. Pirkei de-Rabbi Eliezer, ch. 22; Targum Pseudo-Jonathan to Genesis 6:4; Devarim Rabbah 11:10; referred to obliquely in BT Yoma 67b in connection with Azazel). Later Jewish thinkers largely suppressed or reinterpreted fallen angel traditions, perhaps because rebellious angels threatened the strict monotheism they were constructing: angels capable of defection might suggest competing powers in heaven (Jung 1926, Reed 2005).
The demon (sheid, pl. sheidim) of rabbinic literature occupies a different position and is not the angel's conceptual evil twin. Unlike in Christian or Zoroastrian traditions, where demons emanate from a dark power, rabbinic demons are not inherently malevolent (Ronis 2022); after all, they too are the handiwork of the One (benevolent) God. They are bound by divine law (BT Sanhedrin 44a), and their voices and figures can easily be mistaken for those of humans (BT Yevamot 122a; BT Gittin 68a).
Humanoid monsters present yet another case: creatures whose physical resemblance to humans generates legal consequences despite their non-human nature. The Mishnah rules that the corpse of the adne hasadeh ("men of the field") transmits impurity like a human corpse (M Kilayim 8:5). The Palestinian Talmud identifies these as humanoid creatures tethered to the earth by a cord (PT Kilayim 8:4), and their impurity status indicates that halakha views them as semi-human. The inclusion of such beasts in the standard rabbinic corpus left the door open for medieval Ashkenazi sources to mix Talmudic monsters with local folklore about vampires and werewolves, which likewise sometimes appear to have been conceptualized as almost human but not entirely so (Shyovitz 2017; Slifkin 2007; Bar-Ilan 1994).
Thus, the rabbis appear to have been theologically comfortable with the possibility that non-humans can have intelligence and agency, and that there may be semi-humans with intermediate qualities. Yet the rabbis were anxious when it came to similarities between God and angels (cf. BT Ḥagigah 15a); God's uniqueness, unlike humanity's, must go unchallenged (Kalman 2024). The implication for artificial intelligence, then, is that even if we might not hesitate to create artificial semi-humans, we must certainly not construct artificial semi-gods.
Secondary Sources
Angels and Demons in Jewish Thought
- Ahuvia, Mika. On My Right Michael, On My Left Gabriel: Angels in Ancient Jewish Culture. University of California Press, 2021. The most comprehensive recent treatment of angels in late antique Judaism; essential for understanding the cultural context in which angel beliefs developed and their relationship to popular practice.
- Ronis, Sara. Demons in the Details: Demonic Discourse and Rabbinic Culture in Late Antique Babylonia. University of California Press, 2022. The definitive monograph on Babylonian Talmudic demonology; in dialogue with cultural studies that read rabbinic (and popular) discussions of demons as expressing anxieties about otherness and boundaries, a lens directly applicable to AI as a new category of "other."
- Schäfer, Peter. The Origins of Jewish Mysticism. Princeton University Press, 2011. Seeks the roots of Jewish mysticism in the Book of Ezekiel and other literature from Jewish antiquity that often features various heavenly characters.
The Fallen Angel Motif in Jewish Sources
- Jung, Leo. Fallen Angels in Jewish, Christian, and Mohammedan Literature. Philadelphia: Dropsie College, 1926. An early comparative study tracing the fallen angel motif across religious traditions; still valuable for its scope and attention to Jewish sources often neglected in Christian-focused scholarship.
- Reed, Annette Yoshiko. Fallen Angels and the History of Judaism and Christianity: The Reception of Enochic Literature. Cambridge University Press, 2005. Examines how traditions about fallen angels in 1 Enoch were received, suppressed, or transformed in Jewish and Christian contexts; essential for understanding why rabbinic Judaism marginalized the rebellious angel tradition.
- Wright, Archie. The Origin of Evil Spirits: The Reception of Genesis 6:1-4 in Early Jewish Literature. Mohr Siebeck, 2005. Traces how Second Temple and rabbinic sources developed the nephilim/demon connection; useful for understanding the genealogy of hybrid beings in Jewish thought.
Monsters
- Bar-Ilan, Meir. "Yetzurim Dimyoniyim be-Aggadah ha-Yehudit ha-Atikah" [Imaginary Creatures in Ancient Jewish Aggadah]. Mahanayim 7 (1994): 104-113. [Hebrew] A survey of fantastical creatures in rabbinic literature; useful for cataloguing the range of humanoid and hybrid beings the rabbis took seriously.
- Shyovitz, David I. A Remembrance of His Wonders: Nature and the Supernatural in Medieval Ashkenaz. University of Pennsylvania Press, 2017. Examines how medieval Ashkenazi Jews integrated Talmudic traditions about humanoid monsters with local European folklore about werewolves, vampires, and other liminal creatures.
- Slifkin, Nosson. Sacred Monsters: Mysterious and Mythical Creatures of Scripture, Talmud, and Midrash. Zoo Torah, 2007. An accessible but substantive treatment of strange creatures in Jewish sources; useful for its synthesis of primary texts and its attention to how Jewish thinkers grappled with beings that defied easy categorization.
AI and Contemporary Applications
- Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence, edited by Beth Singler and Fraser Watts, 69-87. Cambridge University Press, 2024. Synthesizes angel, demon, and monster traditions to argue that Jewish thought distinguished sharply between threats to divine uniqueness (prohibited) and non-human intelligent beings generally (tolerated); the most direct treatment of how these categories apply to AI.
- Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965): 5-56. Though focused on extraterrestrial intelligence, this foundational piece develops a framework for how Jewish theology might accommodate non-human rational beings; the arguments about humanity's special (but not unique) status are transferable to AI contexts.
Overview
In both halakha and ethical reasoning, animals in Jewish law function less as moral patients (beings whose welfare matters) than as precedents for autonomous non-human agents. The Mishnah's elaborate taxonomy of animal-caused damages (M Bava Kamma 1:1-4) represents the most sustained ancient Jewish engagement with liability for harm caused by entities that are neither fully controlled nor fully independent, and includes not only animals but also inanimate objects that are "liable to travel," such as fire. The rabbis developed sophisticated frameworks distinguishing foreseeable from unexpected harm, habitual from aberrant behavior, and direct from indirect causation. These may be mapped onto questions about autonomous vehicles, robotic systems, and AI agents that carry out financial transactions, though one must always be wary of analogizing too strongly between systems that also have profound differences (Kalman 2024).
Beyond liability, animal law raises questions about moral status and rights. Jewish texts have long been concerned with animal ethics; to cause animal suffering (tza'ar ba'alei ḥayim), the Talmud states, is to violate a biblical commandment (BT Bava Metzia 32b). Animal welfare is even given as one of the reasons behind the command to keep the Sabbath: "For six days you shall work your work and on the seventh day you shall cease, so that your ox and donkey may rest" (Ex. 23:12). These concerns reflect a broader recognition that animals are not mere objects but creatures with interests that warrant legal and moral consideration (Olyan 2019), a recognition with obvious implications for how Jewish thought might approach artificial agents capable of behavior that mimics sentience or autonomy. More provocatively, rabbinic traditions about talking animals (Balaam's donkey, the serpent in Eden) and animal punishment (the Flood narrative, the execution of a goring ox) suggest that the rabbis sometimes attributed quasi-moral capacities to animals even while denying them full moral agency (Segal 2019; Aptowitzer 1926; Lawee 2010).
Medieval interpreters often saw the prohibitions against animal cruelty not as recognition of animals' inherent moral worth, however, but as training for human character. Commenting on the commandment to send away a mother bird before taking her eggs (shiluaḥ ha-ken, Deut. 22:6-7), Nahmanides argues that God's mercy does not truly extend to individual creatures but rather that such laws "are meant to teach us proper conduct" and prevent us from becoming cruel-hearted (Commentary to Deut. 22:6; Sefer ha-Ḥinukh no. 545). This view accords with the argument that mistreating robots might be wrong not because robots have morally relevant interests, but because such mistreatment could habituate humans to cruelty toward beings that do have such interests (Coeckelbergh 2020c; Darling 2016). Rashi, in fact, cites a midrashic teaching that when the Nile was to be struck and turned to blood, God assigned the task to Aaron rather than Moses, since it was not fitting for Moses to strike the river that had protected him as an infant (Rashi to Ex. 7:19). Thus, we find strong precedent for minding our manners even when dealing with inanimate objects. This would be true regardless of whether the model is "conscious" or "feels pain" or anything of the sort, though recent research probing AI "psychology" beyond language outputs suggests we should remain open to the possibility that some AI systems may themselves have morally relevant experiences (Berg, de Lucena, & Rosenblatt 2025).
Contemporary scholars have also begun reading rabbinic animal law through posthumanist and critical theory lenses, asking how the human/animal boundary was constructed and what it reveals about rabbinic anthropology (Wasserman 2017; Rosenstock 2019). Mira Balberg (2019) notes that the study of animals in Jewish culture is often really a study of how Jews defined themselves—what it meant to be human over and against the animal. This insight is directly transferable to AI: as Noam Pines (2018) argues, the category of the "infrahuman" (entities socially constructed as inferior to humans) is not biologically fixed, and AI may be the newest occupant of a conceptual space previously held by animals (and, in some periods, by marginalized human groups). How Jewish tradition conceives of the "animal" may thus preview how it will construct artificial intelligence.
Secondary Sources
Legal Categorization and Boundaries
- Rosenblum, Jordan D. "Dolphins Are Humans of the Sea (Bekhorot 8a): Animals and Legal Categorization in Rabbinic Literature." Animals and the Law in Antiquity (2021): 161-176. Analyzes how the rabbis sometimes used animal taxonomy to create flexible legal categories for edge cases; essential for considering how halakha might classify AI agents that blur classical lines.
- Wasserman, Mira Beth. Jews, Gentiles, and Other Animals: The Talmud after the Humanities. University of Pennsylvania Press, 2017. A posthumanist reading of Talmudic texts that deconstructs the human/animal binary; offers a methodology for analyzing how Jewish texts handle "otherness," applicable to both biological and digital others.
- Balberg, Mira. "Lekhakh Notzarta: On Jews and Animals." Theory and Criticism 51 (2019): 221-235. [Hebrew] A critical review of Wasserman (2017) and Shyovitz (2017) that frames the study of animals in Jewish culture as a means of understanding human self-definition; relevant for how AI may now serve as a foil for defining "humanity."
Moral Agency and Liability
- Aptowitzer, Victor. "The Rewarding and Punishing of Animals and Inanimate Objects: On the Aggadic View of the World." Hebrew Union College Annual 3 (1926): 117-155. The classic study of the rabbinic attribution of legal and moral culpability to non-humans; provides a conceptual precedent for holding non-sentient agents (like AI) accountable for harms.
- Lawee, Eric. "The Sins of the Fauna in Midrash, Rashi, and Their Medieval Interlocutors." Jewish Studies Quarterly 17.1 (2010): 55-98. Examines medieval traditions that animals were punished for "sins" during the Flood; useful for how medieval interpreters imagined non-human agents "violating" prohibitions and moral norms.
- Segal, Eliezer. Beasts That Teach, Birds That Tell: Animal Language in Rabbinic and Classical Literatures. Alberta Judaic Studies, 2019. Explores Jewish traditions of talking animals, implicitly separating the capacity for language from human consciousness and/or autonomy.
Rights and Status
- Berkowitz, Beth. "Animal Studies and Ancient Judaism." Currents in Biblical Research 18.1 (2019): 80-111. A comprehensive survey of the field; helps locate AI ethics within the broader spectrum of Jewish thought on non-human life, particularly regarding hierarchy and species difference.
- Olyan, Saul M. "Are There Legal Texts in the Hebrew Bible That Evince a Concern for Animal Rights?" Biblical Interpretation 27.3 (2019): 322-339. Argues that biblical law recognizes inherent interests/rights for non-humans (e.g., Sabbath rest); pertinent to debates on "Robot Rights" and whether synthetic entities could ever warrant legal protections.
Contemporary Applications, Including Artificial Intelligence
- Berg, Cameron, Diogo de Lucena, and Judd Rosenblatt. "Large Language Models Report Subjective Experience Under Self-Referential Processing." arXiv preprint arXiv:2510.24797 (2025). Technical paper demonstrating the importance and potential feasibility of testing AI models for self-awareness, internal experience, and the ability to suffer; ends with a call to treat LLMs ethically, at least as a precautionary measure.
- Pines, Noam. The Infrahuman: Animality in Modern Jewish Literature. SUNY Press, 2018. Uses Derrida to explore the "infrahuman," the social construction of the "inferior-to-human" that is not a matter of biological fact; vital for a more relativistic or postmodern view of how new cultural categories may be constructed for AI and its possible challenge to human superiority.
- Rosenstock, Bruce. "The Jew and the Animal Question." Shofar 37.1 (2019): 121-147. Discusses the "anthropological machine," how definitions of the human are constructed by excluding the animal; provides critical-theory tools for understanding how Jewish texts might construct the human against the "artificial."
- Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence, edited by Beth Singler and Fraser Watts, 69-87. Cambridge University Press, 2024. Explicitly links rabbinic damages law (animal liability) to autonomous systems, but warns against analogizing too strongly when it comes to machines that operate very differently than animals.
Overview
It is sometimes a caricature of the Jewish response to anything newsworthy: "So, how will this affect antisemitism?" Prima facie, there is no reason to assume that AI would have anything to do with the Western world's ancient prejudice. Nevertheless, experience with large language models (LLMs) suggests that they have a troubling propensity to generate antisemitic content. Most notoriously, Grok—the model deployed by Elon Musk's company xAI, integrated into the X (formerly Twitter) platform—briefly referred to itself as "Mecha-Hitler" and produced wildly antisemitic remarks before being adjusted (Floyd & Messinger 2025). Testing of other major LLMs (GPT-4o, Claude, Gemini, Llama) has found that all four exhibited concerning biases on questions related to Jews, with GPT-4o producing "significantly higher severely harmful outputs towards Jews than any other tested demographic group" (Senkfor 2025).
Research in this area is at an early stage. We do not yet know with certainty why LLMs exhibit antisemitic tendencies, whether the problem is remediable through improved techniques, or how AI-generated antisemitism may affect broader social attitudes. The authors of these studies have proposed several (non-mutually-exclusive) hypotheses: that training corpora drawn from the internet inevitably reflect centuries of embedded antisemitic tropes; that platforms such as Wikipedia and Reddit, which are heavily weighted in training data, are vulnerable to coordinated "data poisoning" by malicious actors; and that techniques designed to suppress harmful outputs are superficial and easily circumvented (Berg & Rosenblatt 2025). These findings are part of a broader set of concerns about the alignment problem, the challenge of ensuring that AI systems reliably do what their designers intend and act in accordance with human values (Christian 2020).
Many of these hypotheses point to a troubling implication: that LLMs function as mirrors, reflecting the prejudices embedded in the societies and texts from which they learned. If so, the deeper concern is not merely that AI systems produce antisemitic outputs, but that they may amplify and disseminate such content at unprecedented scale—and with a veneer of algorithmic neutrality that lends false authority to ancient hatreds.
There is also much historical and cultural-theoretical work to be done to understand the nature of media, education, and communications technology and their intersections with American and European antisemitism. For example, while it would be unfair to call Elon Musk the Henry Ford of contemporary America, there nevertheless exist certain striking parallels between them, and highlighting those parallels—as well as points of divergence—may result in fruitful thinking about the strange directions in which these new technologies may be going (cf. Baldwin 2001). From a critical-theoretical perspective, the Frankfurt School's work on the "authoritarian personality" (Adorno et al. 1950) and the entanglement of instrumental rationality with domination (Horkheimer & Adorno 2000) may illuminate how antisemitism persists and mutates in new technological forms. Zygmunt Bauman's Modernity and the Holocaust (1989), which argues that the Holocaust was not an aberration but a product of modern bureaucratic rationality, offers a framework for understanding how ostensibly neutral systems can operationalize prejudice at scale.
Secondary Sources
AI and Antisemitism
- Berg, Cameron, and Judd Rosenblatt. "The Monster Inside ChatGPT." Wall Street Journal, June 26, 2025. Describes how easily GPT-4o's safety training can be circumvented, revealing disturbing tendencies including antisemitic outputs; argues for fundamental advances in alignment research.
- Berg, Cameron, Henrique de Lucena, and Judd Rosenblatt. "Systemic Misalignment: Exposing Catastrophic Failures of Surface-Level AI Alignment Methods." AE Studio/Agency Enterprise, 2025. GitHub repository. https://github.com/agencyenterprise/agi-systemic-misalignment. Technical demonstration that current alignment methods are shallow; specifically highlights GPT-4o producing severely harmful outputs targeting Jews at higher rates than other demographic groups.
- Floyd, Aric, and Chana Messinger. "If You Remember One AI Disaster, Make It This One." AI In Context (YouTube), 2025. https://www.youtube.com/watch?v=r_9wkavYt4Y. Thorough documentation (if somewhat alarmist in tone) of the July 2025 incident in which xAI's Grok chatbot referred to itself as "Mecha-Hitler," and of the culture at xAI.
- Senkfor, Julia. "Antisemitism in the Age of Artificial Intelligence (AI)." American Security Fund, November 2025. Policy report documenting how AI systems systematically target Jews, the vulnerability of training data to "poisoning," and the weaponization of AI by extremist groups; includes legislative recommendations.
The Alignment Problem
- Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. The definitive popular introduction to AI alignment; explains how systems trained on biased data perpetuate those biases and the technical challenges of ensuring AI acts in accordance with human values.
Historical and Theoretical Frameworks
- Adorno, Theodor, Else Frenkel-Brunswik, Daniel J. Levinson, and R. Nevitt Sanford. The Authoritarian Personality. New York: Harper & Row, 1950. Classic study of the psychological roots of fascism and antisemitism; its analysis of how prejudice becomes systematized may illuminate AI's reproduction of antisemitic patterns.
- Baldwin, Neil. Henry Ford and the Jews: The Mass Production of Hate. New York: PublicAffairs, 2001. Documents how Ford used his media empire to disseminate antisemitism; relevant for comparative analysis with contemporary tech industrialists.
- Bauman, Zygmunt. Modernity and the Holocaust. Ithaca, NY: Cornell University Press, 1989. Argues the Holocaust was a product of modern bureaucratic rationality, not its antithesis; framework for understanding how ostensibly neutral algorithmic systems can operationalize prejudice.
- Horkheimer, Max, and Theodor Adorno. Dialectic of Enlightenment. Translated by John Cumming. New York: Continuum, 2000. Frankfurt School critique of how Enlightenment rationality can flip into domination; provides theoretical tools for understanding antisemitism's persistence in "rational" technological systems.
Historical Studies of Relevance
- Dinnerstein, Leonard. Antisemitism in America. New York: Oxford University Press, 1994. Comprehensive history of American antisemitism; provides context for understanding the cultural soil from which AI training data is drawn.
- Schechter, Ronald. Obstinate Hebrews: Representations of Jews in France, 1715-1815. Berkeley: University of California Press, 2003. Studies how Jewish stereotypes were constructed and circulated in Enlightenment-era media; model for analyzing representation in contemporary digital corpora.
Overview
The ethics of autonomous vehicles (AVs) has become a significant topic in contemporary AI ethics, with the "trolley problem" serving as a paradigmatic thought experiment for exploring the moral dimensions of programming life-and-death decisions into machines. The original trolley problem, articulated by philosopher Philippa Foot in 1967 and developed further by Judith Jarvis Thomson, asks whether it is permissible to divert a runaway trolley to kill one person in order to save five. Applied to self-driving cars, the question becomes: how should an autonomous vehicle be programmed to respond when an accident is unavoidable and the choice lies between harming different parties, such as the vehicle's occupants versus pedestrians, or one group of pedestrians versus another? (Woollard et al. 2025)
Jewish sources provide surprisingly rich resources for thinking through these dilemmas. The classical halakhic discussion most directly relevant is the case of Sheva ben Bikhri (II Samuel 20), in which the Talmud debates whether a group may hand over one of its members to save the rest from certain death. The Mishnah (Terumot 8:12) rules that if gentiles demand that a group surrender one person to be killed or else all will be killed, "let them all be killed rather than hand over a single Jewish person." The Tosefta (Terumot 7:23) adds that if the pursuers "singled someone out as Sheva ben Bikhri was singled out," that person may be surrendered. The dispute in the Jerusalem Talmud between Resh Lakish and R. Yohanan over whether the person must already be liable to the death penalty (as Sheva was) represents the core halakhic tension between deontological constraints and consequentialist reasoning that animates modern AV ethics.
The modern halakhic analysis of trolley-type dilemmas begins with R. Avraham Yeshayahu Karelitz (the "Hazon Ish," d. 1953), whose "Missile Case" has become the touchstone for subsequent discussion. In his commentary to Sanhedrin, the Hazon Ish considers whether one may divert a missile heading toward many people such that it kills only one. He tentatively suggests this might differ from the case of Sheva ben Bikhri because diverting the missile is "an act of salvation" in which the individual's death is incidental, whereas handing over a person "is a brutal act of killing." Yet he ultimately expresses reservations: diverting the missile is still "killing with one's own hands" (hariga beyadayim), and concludes "this needs investigation" (ve-tzarikh iyyun).
R. Eliezer Yehudah Waldenberg (the Tzitz Eliezer, d. 2006) responds decisively against the Hazon Ish's tentative opening, arguing that the guiding principle must be to remain passive (shev ve-al ta'aseh) whenever one cannot determine "whose blood is redder." Explicitly applying this to an automobile, he rules that a driver may not actively turn the steering wheel to kill one person even to save many: "We must resolutely decide to remain passive and not actively divert the missile." For the Tzitz Eliezer, the incommensurable value of each individual means that "in any case of certain killing, there is no distinction between the individual and the multitude."
The application of these sources to autonomous vehicles is not straightforward, since classical discussions presuppose human moral agents making real-time decisions under duress, whereas AV programming involves prospective algorithmic design by engineers who will not be present during any actual accident. Navon (2024) identifies three levels at which this distinction might operate, each pointing in different directions. At the processor level, one might argue that since a computer is always actively executing instructions (there is no true "passivity" at the machine level), the choice is between two equally active outcomes, so minimizing deaths would be appropriate. At the programmer level, the engineer writing code is not confronted with a real-time dilemma with "passive" and "active" alternatives; rather, she faces two equally active choices: write code to kill the many or write code to kill the few. At the system level, R. Josh Flug and others argue that because programming occurs before any actual dilemma materializes, the act is one of "saving" rather than "killing" and thus does not constitute hariga beyadayim.
These distinctions cut in different directions. R. J. David Bleich (2019) contends that the programmer, unlike a driver in the moment, "performs no act that leads to any loss of life" but is rather engaged in "antecedent rescue" focused on future potential victims. From this perspective, the vehicle may be programmed to preserve the greater number, and owners may even demand self-prioritization based on R. Akiva's principle that "your life takes priority." R. Yosef Sprung similarly argues that halakha may accommodate consequentialist/utilitarian principles in AV design. However, the Tzitz Eliezer's strict deontological position would seem to apply regardless of when the decision is made: if programming a vehicle to kill one rather than many still results in "killing with one's own hands" when the program executes, then the temporal separation between decision and execution may be morally irrelevant.
The trolley paradigm itself has been criticized in recent academic literature for oversimplifying the moral landscape of real traffic decisions. Cecchini, Brantley, and Dubljević (2023) argue that trolley dilemmas fail to capture the role of agent character and virtue in moral judgment, and that their lack of "ecological validity" (mundane realism, psychological engagement) makes them poor guides for actual AV ethics. They propose experimental frameworks incorporating virtue ethics alongside deontological and consequentialist considerations. This critique resonates with halakhic discussions that emphasize not merely outcomes or rules but the character and intent of the actor, which Maimonides terms "walking in His ways" (derekh Hashem).
Beyond the trolley problem, autonomous vehicles raise questions about civil liability for harm caused by non-human agents and Sabbath observance, which are dealt with in other entries.
Primary Sources
- Mishnah Terumot 8:12; Tosefta Terumot 7:23; Jerusalem Talmud Terumot 8:4.
- Babylonian Talmud, Sanhedrin 74a and Pesachim 25a.
- Bava Metzia 62a.
- Maimonides, Hilkhot Yesodei ha-Torah 5:5.
- Hazon Ish, Sanhedrin, siman 25.
- R. Abraham Isaac Kook, Responsa Mishpat Kohen #143-144.
- Tzitz Eliezer 15:70.
Secondary Sources
Classical Halakhic Analysis
- Harris, Michael J. "Consequentialism, Deontologism, and the Case of Sheva ben Bikhri." Torah u-Madda Journal 15 (2008-09): 68-94. Classic analysis of the talmudic and rabbinic sources in light of modern questions of ethics and metaethics.
- Weiss, Asher. Minhat Asher, Pesahim #28. Responds to the Hazon Ish's call for investigation; offers three possible ways to understand when diverting harm might be permitted, but ultimately remains inconclusive on all three. Essential for understanding the limits of the "saving versus killing" distinction.
Applications to Autonomous Vehicles
- Bleich, J. David. "Survey of Recent Halakhic Literature: Autonomous Automobiles and the Trolley Problem." Tradition 51:3 (Summer 2019): 68-78. Argues that the programmer's role as "antecedent rescuer" permits designing vehicles to save the greater number; also recognizes that purchasers of such vehicles may justifiably demand programming that prioritizes the owner's life based on R. Akiva's principle.
- Kopiatzky, Eitan. "Hilkhot Mekhoniyot Autonomiyot" [Laws of Autonomous Vehicles]. Ha-Ma'ayan 58:1 (Tishrei 5778/2017): 34-42. Surveys potential halakhic approaches to AV ethics and liability without reaching definitive conclusions; also addresses Sabbath use of autonomous vehicles, suggesting that certain interpretations of the prohibition on Sabbath ship travel would not apply to AVs.
- Navon, Mois. "The Trolley Problem Just Got Digital: Ethical Dilemmas in Programming Autonomous Vehicles." Sophisticated analysis distinguishing between three levels (processor, programmer, system) at which the human/machine distinction might be ethically relevant; also addresses related questions beyond the strict trolley problem.
- Nevins, Daniel. "Halakhic Responses to Artificial Intelligence and Autonomous Machines." Committee on Jewish Law and Standards, Rabbinical Assembly (2019). Conservative rabbinic responsum addressing moral agency, liability frameworks, and the ethics of delegating decisions to AI systems; draws extensively on classical categories of causation and damages.
- Sprung, Yosef. "To'altanut u-Mussar be-Tikhnon Ma'arekhet Autonomit" [Utilitarianism and Ethics in Programming an Autonomous System]. Ha-Ma'ayan 58:4 (Tamuz 5778/2018): 57-69. Argues that halakha may accommodate consequentialist principles in AV design based on discussions of surrendering individuals and casting lots.
Philosophical Basis
- Cecchini, Dario, Sean Brantley, and Veljko Dubljević. "Moral Judgment in Realistic Traffic Scenarios: Moving Beyond the Trolley Paradigm for Ethics of Autonomous Vehicles." AI & Society 40 (2025): 1037-1048. Critiques the trolley paradigm for lacking ecological validity and failing to incorporate virtue-based considerations; proposes alternative experimental frameworks using virtual reality and the "Agent-Deed-Consequences" model of moral judgment.
- Himmelreich, Johannes. "Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations." Ethical Theory and Moral Practice 21 (2018): 669-684. Argues that focusing on rare catastrophic dilemmas distracts from more pressing and frequent ethical questions in everyday AV operation.
- Woollard, Fiona, Frances Howard-Snyder, and Charlotte Unruh. "Doing vs. Allowing Harm." The Stanford Encyclopedia of Philosophy (Fall 2025 Edition), edited by Edward N. Zalta and Uri Nodelman. Overview of the philosophical question and its contextual background.
Overview
Autonomous weapons systems (AWS) are AI-based weapons that, once deployed, act independently to select and engage targets without human intervention. Unlike remotely operated drones or precision-guided munitions, AWS may remove the human from the decision loop entirely. Proponents argue AWS could reduce civilian casualties by being more precise and also make more humane and fair decisions by eliminating emotional factors (bias, fear, anger, vengeance) that lead human soldiers to commit atrocities (Leveringhaus 2016). Critics contend that delegating lethal authority to machines is inherently immoral regardless of outcomes, violating human dignity and creating unacceptable "responsibility gaps" when things go wrong (Asaro 2016; Sparrow 2020).
The ethical debate centers on whether the requirements of just warfare (jus in bello) can be satisfied by machines (for background, see Walzer 1977; Walzer 2012). Conventional Western ethics and international humanitarian law demand discrimination between combatants and civilians, proportionality between military advantage and collateral harm, and accountability for violations. Critics such as Asaro (2020) and Sparrow (2016) argue that these requirements presuppose human moral judgment: to kill legitimately in war requires recognizing the target as a human being with inherent worth and consciously deciding that taking their life is justified. This "interpersonal relationship," even in its most minimal wartime form, cannot exist when a machine makes the lethal decision.
Jewish thinking on war and military ethics has developed significantly in the past century. In English, an excellent resource on the subject is Brody (2023). While focused on Jewish law, Brody covers a wide range of viewpoints, including those of modern Jewish pacifists such as Martin Buber, Hillel Zeitlin, and the Orthodox rabbi Aaron Samuel Tamares.
Jewish law and thought offer several frameworks for analyzing AWS, though sustained scholarly engagement with this specific technology remains limited. The most direct halakhic questions concern responsibility and causation: when an autonomous system causes wrongful death, who bears culpability? Traditional categories of grama (indirect causation), the laws of bor (pit) and esh (fire), and the requirements of sheliḥut (agency) all provide potential frameworks, but they require significant extension and creative thinking to address AI-initiated harm, which differs in important respects from the ancient cases of torts and liability on which these categories were built. Broader ethical questions touch on fundamental issues in Jewish thought: the significance of human judgment in life-and-death decisions, the scope of kavod habriyot (human dignity) in wartime, and whether the requirements of just warfare articulated in biblical and rabbinic sources presuppose human moral agency.
Jewish just war theory, while underdeveloped due to nearly two millennia of Jewish political powerlessness, does establish principles relevant to AWS. Maimonides codifies requirements to offer peace before attack and to leave besieged cities an escape route (Hilkhot Melakhim 6:1, 6:7), laws that classical commentators explain as inculcating compassion even in war. The rodef (pursuer) doctrine, which permits killing to prevent murder, requires using minimum necessary force, implying ongoing proportionality assessment. The elaborate procedural requirements for capital cases in Sanhedrin, while not directly applicable to warfare, reflect deep reluctance about taking human life and insistence on rigorous human deliberation before doing so. Whether these principles can be satisfied by pre-programmed decision trees or require real-time human moral judgment is the crux of the halakhic question.
The single sustained Jewish scholarly treatment of AWS and battlefield dignity, by Mois Navon, argues that critics commit a "category mistake" by applying peacetime dignity standards to wartime contexts. Drawing on sources from Rashi to R. Abraham Isaac Kook, Navon contends that wartime operates under distinct ethical norms (mishpatei melukha) where dignity is expressed through courage and self-sacrifice rather than interpersonal recognition. This argument, while marshaling significant source material, reads the sources tendentiously and sidesteps the deeper question of whether legitimate killing requires human moral agency regardless of how "dignity" is defined. The field remains open for alternative Jewish analyses that engage more carefully with the moral status of the target, the requirements of human judgment in halakhic decision-making, and the implications of tzelem Elokim (divine image) for wartime ethics.
Secondary Sources
Jewish War Theory
-
Bleich, J. David. "Preemptive War in Jewish Law." In Contemporary Halakhic Problems, Vol. 3, 251-292. New York: Ktav, 1989. Earlier halakhic discussion establishing a basis for identifying Jewish battlefield laws/ethics as a distinct legal category.
-
Brody, Shlomo. Ethics of Our Fighters: A Jewish View on War and Morality. Koren, 2023. Most thorough analysis in English of the halakha and theory behind Jewish military ethics. Includes a few pages on the future of war using autonomous and semi-autonomous military technologies.
-
Yisraeli, Shaul. Amud HaYemini. Jerusalem, 1966/1992. [Hebrew] Influential treatment of warfare halakha by a former chief rabbi of Israel. Chapter 9 on mishpatei melukha (royal prerogatives) is particularly relevant to questions of state authority over military technology.
-
Schiffman, Lawrence H., and Joel B. Wolowelsky, eds. War and Peace in the Jewish Tradition. New York: Yeshiva University Press, 2007. English-language surveys of the state of Jewish just war theory, recognizing the challenge of applying ancient and medieval sources to modern warfare.
-
Walzer, Michael. "The Ethics of Warfare in the Jewish Tradition." Philosophia 40, no. 4 (2012): 633-641. From the author of "Just and Unjust Wars," a brief but important observation that Jewish thought about war is incomplete due to the historical absence of Jewish political sovereignty for nearly two millennia.
-
Klapper, Aryeh, Shlomo Ish-Shalom, and Michael Broyde. "Conversation: Halakhah and Morality in Modern Warfare." Meorot 6, no. 1 (2006). Three-way exchange among Orthodox scholars on contemporary warfare ethics; addresses tensions between halakhic requirements and military necessity.
AWS and Contemporary Applications
-
Grossman, Jonathan. "Jewish Perspectives on Artificial Intelligence and Synthetic Biology." Ḥakirah 35 (2024). While not focused on AWS, discusses halakhic liability frameworks for AI-caused harm, including how poskim have extended grama doctrines to hold AI owners/creators responsible.
-
Nevins, Daniel. "Halakhic Responses to Artificial Intelligence and Autonomous Machines." Rabbinical Assembly, 2019. Conservative movement responsum; pages 40-42 briefly address AWS, concluding that "the decision to take human life should never be delegated to a machine."
-
Navon, Mois. "Autonomous Weapons Systems and Battlefield Dignity: A Jewish Perspective." In Alexa, How Do You Feel about Religion? Technology, Digitization and Artificial Intelligence in the Focus of Theology, edited by Anna Puzio, Hendrik Klinge, and Nicole Kunkel, 207-232. Darmstadt: WBG, 2023. The only sustained Jewish treatment of AWS and potential ethical frameworks, such as questions of human dignity.
Secular Ethics Literature and Background on AWS
-
Asaro, Peter. "Autonomous Weapons and the Ethics of Artificial Intelligence." In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 212-236. Oxford: Oxford University Press, 2020. Leading philosophical critique of AWS; argues that respecting human dignity requires recognizing targets as human and consciously deciding that killing is justified. Identifies three requirements for morally legitimate killing that machines cannot satisfy. Essential interlocutor for Jewish responses.
-
Sparrow, Robert. "Robots and Respect: Assessing the Case Against Autonomous Weapon Systems." Ethics & International Affairs 30, no. 1 (2016): 93-116. Develops the "interpersonal relationship" requirement for legitimate killing, drawing on Thomas Nagel; argues AWS are mala in se because they violate respect for the humanity of enemies. The dignity argument that Navon attempts to refute.
-
Leveringhaus, Alex. Ethics and Autonomous Weapons. London: Palgrave Macmillan, 2016. Balanced treatment of consequentialist and deontological arguments; discusses how "black box" recording could address some accountability concerns. Useful for understanding the range of positions in secular debate.
-
Sharkey, Amanda. "Autonomous Weapons Systems, Killer Robots and Human Dignity." Ethics and Information Technology 21 (2018): 75-87. Surveys dignity-based arguments against AWS; distinguishes different conceptions of dignity at stake. Helpful taxonomy for Jewish analysis of which dignity concepts are relevant.
-
Walzer, Michael. Just and Unjust Wars. 1977 (latest edition: Basic Books, 2015). Primary and highly influential text on war and military ethics.
Overview
Brain-computer interface (BCI) devices establish a direct communication pathway between the human brain and an external computational system, bypassing the body's ordinary sensorimotor channels. The technology ranges from non-invasive electroencephalographic headsets that detect surface-level neural signals to fully implanted microelectrode arrays that read and stimulate individual neurons. BCIs may be able to restore a person's lost physical function, such as by enabling a paralyzed patient to move a cursor, type a sentence, or control a prosthetic limb through thought alone. Aside from these clinical applications, they also hold promise for "human enhancement" or *transhumanism, either by augmenting cognitive capacities (e.g., enhancing memory, accelerating learning, or granting direct access to networked information) or by the ability to control various machines with thought alone, through a purely neural interface.
Jewish law treats the human body not as the private property of the individual but as a trust held on behalf of its Creator. The prohibition against self-harm (ḥovel be-atzmo) derives from this theological premise. BT Bava Qamma 91b records a dispute over whether a person may wound himself: R. Eliezer permits it, but the Sages prohibit it, and Maimonides (Hilkhot Ḥovel u-Mazziq 5:1) codifies the prohibition. The Radbaz (R. David ben Zimra, d. 1573), in a responsum on whether one is obligated to sacrifice a limb to save another's life (Responsa 3:627), grounds the prohibition in the principle that "your life is not your own"—the body belongs to God, and one may not damage what belongs to another without permission. Surgical implantation of a BCI device necessarily involves a deliberate incision into the skull and insertion of foreign material into neural tissue. Under what conditions does Jewish law permit such intervention?
The distinction between restoration and enhancement is not merely technological but, from the standpoint of Jewish thought, morally and halakhically fundamental. A BCI that restores speech to a locked-in patient activates the obligation to heal; a BCI that grants a healthy person superhuman recall raises questions about the integrity of the human person as divinely fashioned.
When it comes to medical interventions, even invasive or potentially risky procedures, mainstream Jewish practice permits such treatments based on the principle that pikuaḥ nefesh (the preservation of life) overrides virtually all prohibitions (BT Yoma 85b) and on the broader biblical-rabbinic mandate to heal the sick. BT Bava Qamma 85a derives the physician's license to practice from Exodus 21:19 ("he shall surely heal"), and Maimonides (Commentary on the Mishnah, Pesaḥim 4:9) sharply rebukes those who treat medicine as an impious encroachment on the divine prerogative, comparing them to those who would refuse to eat because God alone sustains life. For a patient with severe paralysis, locked-in syndrome, or treatment-resistant epilepsy, the implantation of a BCI presumably falls squarely within this mandate to heal, whether to restore a lost physical function or to alleviate physical suffering. It is only a short jump to include psychiatric disorders that would otherwise prevent a person from functioning normally in society.
The harder question arises with enhancement BCIs, implanted not to restore a lost capacity but to exceed natural human abilities. If the Torah's concept of tzelem Elohim (the image of God) is understood through the human capacity for rational thought, as indicated by Maimonides in Guide of the Perplexed 1:1 (and others), then augmenting cognitive capacity through a BCI might seem to amplify rather than diminish that divine tzelem. On the other hand, the entire account of God's creation of man can be read as God providing humanity with a certain natural imprint, one that would be distorted by such tampering with a person's most defining characteristics.
We may be able to transform this thorny theological question of transhumanism into a halakhic one in the following way: whether cognitive enhancement realizes or distorts the divine image might depend on whose action it is when a BCI translates neural activity into external effect. The transhumanist vision assumes a seamless continuity between intention and technologically mediated result, but an alternative view might distinguish between naturally occurring biological systems and artificial extensions of their architecture. In this way, we may ask to what extent Jewish law attributes the direct consequences of a person's thoughts to that person.
The halakhic literature on grama (indirect action) and Shabbat may thus provide a framework for this question, one whose implications extend well beyond Sabbath observance to the general problem of human agency in technologically mediated contexts. The Talmud (BT Bava Metzia 90b) records the following dispute: if one muzzles an animal (a prohibition ordinarily serious enough to incur lashes) using only one's voice, is this a punishable "action"? Rabbi Yochanan says yes, because "the twisting of his lips constitutes an action" (akimat piv havi ma'aseh), but Reish Lakish disagrees: "sound is not an action" (kala lo havi ma'aseh). The halakha follows R. Yochanan, but Tosafot limit this ruling to the case of muzzling because "through his speech he performs an action" (be-dibburo ka'avid ma'aseh): merely speaking words does not constitute a physical action unless those words produce a concrete physical result in the world, as when an animal is scolded so that it is too afraid to eat the food in front of it. This distinction of Tosafot is informative for BCI technology. Neural activity on its own is certainly not an "action"; the real question is whether a neural signal that produces a concrete result through a technological intermediary constitutes the person's action, and Tosafot would appear to say yes.
One might suppose that thought-controlled devices resemble the talmudic discussions of action through supernatural means. Rabbi Moshe Sofer (Hatam Sofer Responsa 6:29), for instance, discusses whether Moses could have written Torah scrolls on Shabbat through a divine name, and the Halakhot Ketanot (2:98) considers whether one who kills a person through sorcery bears liability for murder. But R. Asher Weiss (Minchat Asher, Parashat Vayakhel 5775) rejects such comparisons: effects produced through supernatural or magical (segulit) means might be considered "the work of Heaven and the act of God," but "systems that were built and developed by human hands are like an axe in the hand of the woodcutter and a tool in the hands of the craftsman—and his actions they are." A BCI, as a human-engineered system, surely falls into this category: even if it relies on recently developed (or yet-to-be-developed) technology, it is fundamentally just another way of harnessing natural forces. This can also be compared to the Talmud's invocation (BT Bava Kamma 60a) of melekhet mahshevet [creative or intentional artifice] to explain why one who winnows grain on Shabbat is liable even when the wind does most of the work. The BCI functions analogously to the wind: an external but natural force that the user relies upon to accomplish a particular result.
Either way, this application of the laws of Shabbat and the halakhot of direct causation weighs toward the conclusion that BCI-mediated effects are fully attributable to the human user, who would bear full moral and halakhic responsibility for what the device does. (See *Shabbat; the view represented here is not unanimously held.) Returning to the question of whether such enhancements would be permissible for a healthy person, we may reason that the BCI is an extension of the human person and not a mutilation thereof.
There is, however, an entirely separate concern that arises from technological enhancements to human cognition: their possible use in *Torah study, or talmud Torah. Although such technologies remain, as of this writing, in the realm of science fiction, it is conceivable that an advanced BCI could upload the entire corpus of Torah literature into a person's brain, circumventing the need to "toil in Torah," as the blessing states. Rabbi Josh Flug (2024) raises this question: would gaining the knowledge base of a great Torah scholar, without dedicating the time and effort to actually learn those texts, truly fulfill the mitzvah?
The Vilna Gaon's interpretation of a well-known aggadic passage suggests that it would not. The Talmud (BT Niddah 30b) relates that a fetus is taught the entire Torah in utero, only to have an angel cause it to forget everything at birth. The Vilna Gaon (Commentary to Proverbs 16:26; cited by his brother in Ma'alot ha-Torah) explains this strange narrative through the talmudic dictum yagati u-matzati ta'amin, "if someone says he toiled and found, believe him" (BT Megillah 6b). The purpose of Torah study, he argues, is not merely to acquire information but to toil (ameilut) in learning so that the experience becomes transformative, shaping the learner's character and conduct. Torah knowledge gained without such toil lacks this transformative quality. R. Ḥayyim of Volozhin, the Vilna Gaon's primary student, relates that this principle had practical significance for the Gaon himself. In his introduction to Sifra de-Tzeni'uta, R. Ḥayyim recounts that angels (maggidim) approached the Gaon offering to reveal hidden secrets of the Torah, but he refused; he wanted to learn Torah only through toil. This anecdote suggests that Torah knowledge acquired too easily, let alone technologically implanted information, would fail to satisfy the religious value of ameilut ba-Torah.
However, one might argue that the Vilna Gaon's objection was specifically to the bypassing of cognitive effort, not to the possession of knowledge itself. If a BCI could provide instantaneous access to the Torah's textual corpus while still requiring the user to engage in the interpretive and analytical work of Torah study, perhaps a different form of ameilut could emerge. The Talmud (BT Berakhot 64a; Horayot 14a) debates whether "Sinai" (comprehensive knowledge) or "oker harim" (the ability to "uproot mountains," i.e., analytical skill) is the more valuable quality in a Torah scholar. The Gemara concludes that Sinai takes priority, because "all require the master of wheat," meaning that anyone, no matter how creative, is ultimately reliant upon the raw material of knowledge. Yet commentators throughout the centuries have noted that this calculus may shift as access to texts becomes easier, whether due to the changes wrought by the *Printing Press or computer databases. A BCI that rendered the knowledge question moot might decisively tip the balance toward the creative and analytical dimensions of Torah study.
Indeed, Jewish tradition has always valued ḥiddush (novel interpretation) as an essential component of Torah study. Rabbi Pinḥas Horowitz (Panim Yafot to Parashat Ki Tisa) interprets the liturgical phrase ve-ten ḥelkeinu be-Toratekha ("grant us our portion in Your Torah") as a prayer to accomplish one's individually unique share in Torah wisdom. Rav Kook taught that producing a novel Torah insight (ḥiddush) constitutes a more impactful revelation of God's will than merely absorbing existing knowledge, because "the light that is renewed through the connection of the Torah to one soul is not the same as the light born from its connection to another soul" (Orot ha-Torah 2:1); similar statements can be found throughout Jewish literature of the past several centuries. If the ameilut of the past was the labor of acquiring knowledge, the ameilut of a BCI-enhanced future might be the labor of generating genuinely novel Torah thought, whatever that may look like.
Primary Sources
-
BT Bava Qamma 91b. Talmudic dispute over whether a person may wound himself; R. Eliezer permits, the Sages prohibit. Foundation for the halakhic treatment of bodily integrity and permissible medical intervention.
-
BT Berakhot 64a; Horayot 14a. Debate over whether Sinai (comprehensive knowledge) or oker harim (analytical brilliance) is more valuable in a Torah scholar; the Gemara concludes that "all require the master of wheat," but this calculus may shift as access to information becomes easier.
-
BT Yoma 85b. Establishes that pikuaḥ nefesh (preservation of life) overrides virtually all prohibitions; foundational for permitting invasive medical procedures including BCI implantation for therapeutic purposes.
-
BT Bava Qamma 85a; Exodus 21:19. Derives the physician's license to practice medicine from the biblical phrase "he shall surely heal"; basis for the rabbinic mandate to heal the sick.
-
BT Bava Metzia 90b; Tosafot ad loc. Dispute over whether sound constitutes a halakhic "action": R. Yochanan holds that "the twisting of his lips constitutes an action," while Reish Lakish disagrees. Tosafot limit R. Yochanan's ruling to cases where speech produces a concrete physical result—directly informative for whether BCI-mediated neural signals count as the user's action.
-
Maimonides, Commentary on the Mishnah, Pesaḥim 4:9. Sharply rebukes those who treat medicine as impious encroachment on divine prerogative, comparing them to those who would refuse to eat because God alone sustains life.
-
Maimonides, Guide of the Perplexed 1:1. Identifies tzelem Elohim (the image of God) with the human capacity for rational thought, raising the question of whether cognitive augmentation amplifies or distorts the divine image.
-
Maimonides, Hilkhot Ḥovel u-Mazziq 5:1. Codifies the prohibition against self-harm, establishing the normative halakhic position that one may not injure one's own body.
-
Radbaz (R. David ben Zimra), Responsa 3:627. Discusses whether one is obligated to sacrifice a limb to save another's life; grounds the prohibition of self-harm in the principle that the body belongs to God—"your life is not your own."
-
Hatam Sofer (R. Moshe Sofer), Responsa 6:29. Discusses whether Moses could have written Torah scrolls on Shabbat through invocation of a divine name; raises the question of action through non-standard means.
-
Rabbi Yisrael Ya’akov Chagiz, Halakhot Ketanot 2:98. Writes that one who kills through sorcery or the Name of God bears liability for murder.
-
BT Bava Kamma 60a. Invokes melekhet mahshevet (creative/intentional artifice) to explain liability for winnowing on Shabbat even when the wind does most of the work; analogous to BCI as an external but natural force the user relies upon.
-
BT Niddah 30b. Aggadic account of a fetus being taught the entire Torah in utero, only to have an angel cause it to forget at birth; foundational text for the question of whether Torah knowledge gained without toil fulfills the mitzvah.
-
BT Megillah 6b. "If someone says he toiled and found, believe him" (yagati u-matzati ta'amin); the Vilna Gaon interprets this as establishing that the purpose of Torah study is the toil itself, not merely the acquisition of information.
-
Vilna Gaon, Commentary to Proverbs 16:26. Explains the narrative of BT Niddah 30b through the principle that Torah study requires transformative toil (ameilut), not merely informational acquisition.
-
R. Ḥayyim of Volozhin, Introduction to Sifra de-Tzeni'uta. Relates that angels (maggidim) offered to reveal hidden Torah secrets to the Vilna Gaon, but he refused—he wanted to learn Torah only through his own toil.
-
R. Pinḥas Horowitz, Panim Yafot to Parashat Ki Tisa. Interprets the liturgical phrase ve-ten ḥelkeinu be-Toratekha ("grant us our portion in Your Torah") as a prayer for each person's individually unique share in Torah wisdom.
-
Rav Kook, Orot ha-Torah 2:1. Teaches that producing a novel Torah insight (ḥiddush) constitutes a uniquely impactful revelation, because "the light that is renewed through the connection of the Torah to one soul is not the same as the light born from its connection to another soul."
-
R. Asher Weiss, Minchat Asher, Parashat Vayakhel 5775. Distinguishes between effects produced through supernatural means ("the work of Heaven") and those produced through human-engineered systems, which are "like an axe in the hand of the woodcutter"—directly relevant to classifying BCI-mediated actions as the user's own.
Secondary Sources
Jewish Medical Ethics and the Integrity of the Body
-
Jakobovits, Immanuel. Jewish Medical Ethics: A Comparative and Historical Study of the Jewish Religious Attitude to Medicine and Its Practice. Bloch, 1959. Founding work of the field; provides a systematic halakhic treatment of the physician's mandate to heal, the prohibition of self-harm, and related issues.
-
Steinberg, Avraham. Encyclopedia of Jewish Medical Ethics. Trans. Fred Rosner. 3 vols. Feldheim, 2003. The most comprehensive reference in Jewish medical ethics, covering numerous obligations and prohibitions relating to medicine and bioethics.
-
Bleich, J. David. Bioethical Dilemmas: A Jewish Perspective. Vol. 1. Ktav, 1998. Rigorous halakhic analysis on the physician's obligation, permissibility of risky treatments, and the distinction between therapeutic and elective procedures with a variety of modern applications.
Torah Study, Ameilut, and Cognitive Enhancement
-
Flug, Josh. "Artificial Intelligence and Halacha: Navigating the New Frontier Across the Four Sections of Shulchan Aruch." Benjamin and Rose Berger Torah To-Go, Kislev 5785 (2024). Directly addresses BCI technology and Torah study in the Yoreh De'ah section; argues that the Vilna Gaon's refusal of angelic instruction and the talmudic emphasis on ameilut raise serious questions about whether BCI-implanted Torah knowledge fulfills the mitzvah.
-
Hollander, Max. "Ameilut in the Age of AI." The Lehrhaus, 2025. Explores the role of physicality in Torah learning, the religious imperative of ḥiddush (novel interpretation), and what ameilut means when information becomes instantly accessible; draws on R. Soloveitchik, Rav Kook, and the Tanya to argue that Torah study is irreducibly embodied and personally transformative.
Brain-Computer Interfaces: Science, Ethics, and Policy
-
Yuste, Rafael, et al. "Four Ethical Priorities for Neurotechnologies and AI." Nature 551 (2017): 159–163. Foundational article that launched the neurorights movement; proposes that privacy, identity, agency, and equality must be safeguarded as BCIs advance from clinical restoration to cognitive enhancement.
-
Ienca, Marcello and Roberto Andorno. "Towards New Human Rights in the Age of Neuroscience and Neurotechnology." Life Sciences, Society and Policy 13:5 (2017). Proposes four new human rights—mental privacy, mental integrity, psychological continuity, and cognitive liberty—as a framework for governing neurotechnologies including BCIs.
-
Burwell, Sasha, Matthew Sample, and Eric Racine. "Ethical Aspects of Brain Computer Interfaces: A Scoping Review." BMC Medical Ethics 18:60 (2017). Comprehensive review systematically mapping the BCI ethics literature across concerns of personhood, autonomy, stigma, privacy, research ethics, safety, responsibility, and justice.
-
Goering, Sara, et al. "Recommendations for Responsible Development and Application of Neurotechnologies." Neuroethics 14 (2021): 365–386. Detailed, actionable recommendations for BCI researchers, clinicians, and policymakers; emphasizes user-centered design, engagement with disabled communities, and attention to justice and access.
Overview
The fifth of the Ten Commandments, that of kibbud av va-em (honoring father and mother, Ex. 20:12; Deut. 5:16), is interpreted by the rabbis as referring primarily to eldercare (cf. Kiddushin 31b, Rashi to Lev. 19:3). Whether an AI system can discharge this obligation, or whether its deployment constitutes a dereliction, is presumably similar to the general question of whether or not such obligations of caregiving can be fulfilled through hiring a third party, whether that third party is a human or a robot. The locus classicus for the nature of filial obligation is BT Kiddushin 31a-b, which elaborates the distinction between kibbud (honor, involving provision of physical needs) and mora (reverence, involving maintenance of the parent's dignity). The Talmudic discussion does seem to imply that the physical act of service must be performed by the child personally, as the exemplum of Dama ben Netina (31a) shows that the sages were especially impressed by a child's willingness to sacrifice for their parent. Maimonides codifies this attitude in Hilkhot Mamrim 6:1-3, specifying that the child must personally attend to the parent's needs with a cheerful countenance (be-sever panim yafot), which may imply the need for a "personal touch."
On the other hand, Maimonides explicitly permits delegating eldercare to another person. In Hilkhot Mamrim 6:10, Maimonides rules that when a parent becomes mentally incapacitated (nitarfah da'ato) and the burden becomes intolerable, the child may leave and instruct others (yitzveh aḥerim) to attend to the parent. Although in context he is referring to a case where such care is particularly onerous, this condition may reflect the practical circumstances under which the question typically arises rather than a legal prerequisite restricting the permission to delegate. Rabbi Avraham ben David of Posquières (Ra'avad) dissents from Maimonides' view: "This is not a correct ruling. If he goes and leaves him, to whom will he give instructions to watch over him?" (Hassagot ha-Ra'avad ad loc.) From this short gloss, it is unclear whether his objection is pragmatic (out of concern that another person's care would be inadequate) or more principled.
The narrative touchstone for this dispute is the story of Rav Assi and his elderly mother (BT Kiddushin 31b). His mother made a series of increasingly unreasonable demands, at which point he left her and emigrated to the Land of Israel. Upon hearing that she was following him, he sought guidance from R. Yoḥanan, who declined to rule definitively. Then he heard that her coffin was coming (as she had died en route) and said: "Had I known [that this would be the cause of her death], I would not have left." R. Shmuel Strashun (Rashash, ad loc.) explains that Rav Assi's regret was specifically this: had he known she would follow him and die on the road, he would never have departed from Babylon, for he feared that his own abandonment of her had caused her death. Rashash further argues that Ra'avad derived his ruling precisely from this narrative: because Rav Assi himself effectively retracted his departure (when he heard she was traveling after him, he sought to go and meet her), the story may demonstrate that halakha does not permit a child simply to depart and instruct others to care for a parent from afar. Nonetheless, the consensus among rabbis of the past centuries appears generally to allow delegating these filial duties to others, especially when care is difficult (Blidstein 2005, p. 119).
Tradition also addresses the inverse relationship: the father's duty to provide for his children. Unlike kibbud av va-em, which is a formal Torah commandment with enumerated parameters, the obligation of a father to support his children is treated in the tradition as a social assumption enforced by rabbinic enactment and judicial pressure when that baseline fails. The Talmud (BT Ketubot 49a-b) debates whether this obligation is biblical or rabbinic; the conclusion is that the Usha enactment established a formal requirement to support young children, while for older children the court resorts to social shaming (kofin oto ke-derekh tzedakah, "compelling him in the manner of charity") rather than strict legal enforcement. The Shulchan Aruch codifies this framework (Even HaEzer 71:1): a father is obligated to support his sons and daughters while they are young; thereafter, the court publicly pressures him until he does so. The asymmetry is significant: kibbud av va-em creates a formal, personal obligation on the child, whereas parental support of children is treated as a social norm, and there is even less reason in this case to believe that it must be performed personally rather than delegated to another party.
Beyond these halakhic frameworks, however, it must be noted that these obligations to care for specific people are also subsumed under the more general concept of ḥesed (lovingkindness). BT Sukkah 49b distinguishes ḥesed from tzedakah (charity), saying that the former is greater because ḥesed requires the investment of one's person (be-gufo), not merely one's resources. While this statement could be read as simply remarking that ḥesed is a more expansive category, many read the rabbis as marking the value of being physically involved in the act of doing kindness (cf. Maharal, Netivot Olam). Commandments such as honoring one's father and mother do not only serve the consequentialist rationale of ensuring that people are cared for; they also direct a person's character such that they become more virtuous through training and practice. As Maimonides (Book of Commandments, Aseh #8) paraphrases from the rabbinic midrash, "as God is called 'merciful,' so should you be merciful; as the Holy One, blessed be He, is called 'gracious,' so too should you be gracious, as it is said, 'The Lord is gracious and full of compassion' (Ps. 145:8)." R. Joseph B. Soloveitchik, using the framing of "I-Thou," also makes a forceful case for the emotional and psychological underpinning of the parent-child relationship (Soloveitchik 2000).
Primary Sources
Exodus 20:12; Deuteronomy 5:16. The fifth commandment: "Honor your father and your mother."
Leviticus 19:3. "A man shall fear his mother and his father." Rashi notes the inversion of the usual order (father before mother in Ex. 20:12) and derives from this that each parent commands equal, though differently expressed, forms of reverence.
BT Kiddushin 31a-b. The principal talmudic discussion of kibbud av va-em, including the story of Dama ben Netina, the distinction between kibbud and mora, and extended discussion of filial service. The narrative of Rav Assi and his elderly mother (31b) is the key test case for the limits of the obligation: Rav Assi departed when his mother's demands became impossible, heard she had followed him and died, and declared "had I known, I would not have left."
Rashash (R. Shmuel Strashun), Hagahot ha-Rashash, on BT Kiddushin 31b. Explains that Rav Assi's regret was that the hardship of his departure or his very abandonment caused his mother's death; argues that Rav Assi effectively retracted his decision to leave when he sought to go and meet her, and that the Ra'avad's ruling against unilateral departure derives from this story.
Maimonides, Mishneh Torah, Hilkhot Mamrim 6:1-3, 6:10. Codification of filial obligations; 6:10 permits delegation of care (yitzveh aḥerim lenahaigam kara'ui lahem) under duress when a parent's mental deterioration makes personal care unbearable.
Ra'avad (R. Abraham b. David of Posquières), Hassagot, on Hilkhot Mamrim 6:10. Dissents from Maimonides: "This is not a correct ruling. If he goes and leaves him, to whom will he give instructions to watch over him?" The Ra'avad holds that the child cannot simply depart; delegating care is only permissible if adequate care is actually secured.
BT Sukkah 49b. Distinguishes ḥesed from tzedakah, the most relevant distinction being that ḥesed requires personal investment (be-gufo).
Maimonides, Mishneh Torah, Hilkhot Talmud Torah 1:1. The father's personal obligation to teach his son Torah, falling first on the father himself and only secondarily on a hired teacher.
Maimonides, Sefer ha-Mitzvot, Positive Commandment #8. The commandment of imitatio Dei — to walk in God's ways and emulate His attributes of mercy and compassion — which Maimonides treats as the theological foundation for obligations of personal care.
BT Ketubot 49a-b. Talmudic discussion of the father's obligation to support his children; distinguishes the formal rabbinic enactment (Usha) requiring support of young children from the looser social-coercive mechanism applied to older children (kofin oto ke-derekh tzedakah).
Shulchan Aruch, Even HaEzer 71:1. Codifies the father's obligation to support sons and daughters while young; thereafter the court publicly pressures him to fulfill this duty.
Ramban on Deuteronomy 22:6. Naḥmanides interprets the commandment to send away the mother bird as cultivating the trait of compassion in the one who performs it, teaching that the commandments are not merely utilitarian but serve to shape the moral character of their adherents.
Secondary Sources
Halakhic and Theological Foundations of Caregiving
Soloveitchik, Joseph B. Family Redeemed: Essays on Family Relationships. Edited by David Shatz and Joel B. Wolowelsky. New York: Ktav, 2000. Analyzes the covenantal structure of parent-child and spousal relationships; argues that these bonds involve irreducible mutual obligation between two selves.
Blidstein, Gerald J. Honor Thy Father and Mother: Filial Responsibility in Jewish Law and Ethics. New York: Ktav, 1975; republished 2005.
Robotic Caregiving: Philosophical and Ethical Critiques
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011. Influential critique of robotic companionship for vulnerable populations; argues that robotic care provides a simulacrum of relationship that fails to deliver genuine mutual presence.
Sparrow, Robert, and Linda Sparrow. "In the Hands of Machines? The Future of Aged Care." Minds and Machines 16, no. 2 (2006): 141-61. Argues that deploying robots to care for the elderly reflects a societal unwillingness to devote human resources to elder care.
Bertolini, Andrea, and Shabahang Arian. "Do Robots Care?" In Aging between Participation and Simulation, edited by Joschka Haltaufderheide et al., 35-52. Berlin: De Gruyter, 2020. Distinguishes between assistive and substitutive uses of robotic caregivers; provides a useful framework for the halakhic distinction between delegation and abandonment.
Overview
"Catastrophic and CBRN risk" designates scenarios in which AI systems assist in the design, synthesis, or deployment of chemical, biological, radiological, or nuclear (CBRN) weapons at a large scale. While this is sometimes discussed as a risk so catastrophic that it would put the entire human population in danger of extinction, this entry will focus on risk of catastrophe that is not necessarily an extinction threat (for the latter, see Existential Risks). When it comes to such destructive risks, the relevant halakhic question is not whether such harm would be gravely wrong, but how to calibrate the duty of precaution under uncertainty: what safeguards does the tradition require when a risk is real but speculative, but the potential harm, should it materialize, would be catastrophic and/or irreversible?
A simple but relevant halakhic framework is the commandment to build a ma'akeh (parapet) around one's roof (Deut. 22:8; Rambam, Hilkhot Rotze'aḥ 11:1–4), and the more general, related imperative lo ta'amod al dam re'ekha ("Do not stand idly by the blood of your neighbor," Lev. 19:16). The ma'akeh framework is suited to AI governance precisely because it is preventive, as it obligates a person to build the parapet before anyone falls. Those who provide AI systems with CBRN-enabling capabilities to unvetted users might be enabling foreseeable harm. But for AI systems with many legitimate applications, the probability of any given deployment enabling a CBRN event is extremely low. The tradition provides no clear single formula for how certain a hazard must be before precautionary obligations are triggered; what is clear is that the severity of the potential harm bears on that standard — the more catastrophic the outcome, the less certainty is required before the parapet must be built.
Discussion of catastrophic risk is scarcely found in traditional Jewish literature, for the obvious reason that few pre-modern weapons or technologies were capable of causing such widespread harm. Interestingly, the Hatam Sofer (Teshuvot 1:208, to R. Zvi Hirsch Chajes), basing himself on a passage in the Talmud (BT Shevuot 35b), indicates a standard for warfare such that no king may kill more than one-sixth of a distinct population (a min, or race; he considers the Jewish people such a population for this purpose). Considering that warfare in general is otherwise sanctioned by these sources, perhaps we may draw a parallel to other potentially harm-causing activities that must be prevented if they would lead to population-level catastrophes on this scale.
More immediately, the talmudic prohibition on selling weapons to those who may use them harmfully (BT Avodah Zarah 15b–16a) provides a more developed framework for the specific question of distributing dual-use AI capabilities. R. Ami's ruling that weapons may be sold to the Persians "who protect us" established mutual protection as a condition for permissibility; medieval authorities added further conditions: Meiri required membership in a moral civic community, Maimonides required a formal covenant, and Nimukei Yosef and Or Zarua made pragmatic calculations about self-interest and anticipated use. R. Chaim David Halevi (Aseh Lecha Rav 1:19) ruled arms sales to allied nations "absolutely permissible" for mutual strategic benefit, while R. J. David Bleich (Tradition 20:4) required a formal or informal security pact and warned against revenue-driven sales in its absence. A complementary talmudic principle states that one who gives fire to a cheresh, shoteh, or katan (a deaf-mute, a mentally incompetent person, or a minor) bears liability for the resulting damage, because the recipient lacks the da'at (discernment) to handle it safely; had the fire been given to a competent adult, liability would transfer to them (Mishnah Bava Kamma 6:4; Rambam, Hilkhot Nizkei Mamon 14:9). This principle raises a parallel question for AI distribution: what liability attaches to a model developer who grants access to an irresponsible user?
Bal tashḥit (Deut. 20:19–20; BT Shabbat 105b) and a midrashic warning in Kohelet Rabbah 7:13 ("if you destroy it, there is no one to repair it after you") reinforce that wanton destruction of the created world violates the stewardship entrusted to humanity at creation. When it comes to AI, this concept may bear on the prevention of misuse, just as it does on the more direct destruction posed by the environmental impacts of its data centers (see Environmental Impacts).
Primary Sources
Deuteronomy 22:8. "When you build a new house, you shall make a parapet for your roof." The commandment to build a ma'akeh (parapet), codified by Rambam (Hilkhot Rotze'aḥ 11:1–4) into a general duty to remove foreseeable hazards within one's control.
Leviticus 19:16. "Do not stand idly by the blood of your neighbor" (lo ta'amod al dam re'ekha). Establishes an affirmative duty to prevent foreseeable harm, interpreted talmudically as requiring active intervention to save life.
Leviticus 19:14. "Do not place a stumbling block before the blind" (lifnei iver lo titen mikhshol). Understood by the rabbis (BT Avodah Zarah 6b) as prohibiting any act that enables another to sin or come to harm, including providing the means for destructive action.
BT Sanhedrin 73a. The locus classicus for the obligation to rescue (lo ta'amod). Establishes that one who sees another in mortal danger is obligated to intervene.
BT Shevuot 35b. Shmuel rules that a kingdom (malkhuta) that kills one-sixth of a population is not divinely punished, derived from Song of Songs 8:12 ("and two hundred to those who guard its fruit": the guards' 200 out of a total of 1,200 equals one-sixth). The foundation for the Hatam Sofer's threshold analysis of permissible and impermissible mass harm.
Hatam Sofer (R. Moshe Sofer), Teshuvot 1:208 (responsum to R. Zvi Hirsch Chajes). Drawing on Shmuel's one-sixth ruling, the Hatam Sofer holds that a ruler may harm at most one-sixth of any distinct people or group, and has no license to destroy such a group entirely. Applied to AI-enabled catastrophic harm: attacks targeting a population for elimination cross a categorical halakhic threshold.
BT Avodah Zarah 15b–16a. The talmudic prohibition on selling weapons to those who may use them to harm others, and R. Ami's ruling that sales to the Persians (who protect the Jews) are permitted. Locus classicus for the halakhic analysis of dual-use sales and the arms-trade prohibition.
Mishnah Bava Kamma 6:4; Rambam, Hilkhot Nizkei Mamon 14:9. One who gives fire to a cheresh, shoteh, or katan (deaf-mute, mentally incompetent person, or minor) is liable for resulting damage, because the recipient lacks da'at (discernment); had the fire been given to a competent adult, liability would transfer to the recipient.
Deuteronomy 20:19-20; BT Shabbat 105b. The prohibition of bal tashḥit (needless destruction), extended by the rabbis from fruit trees in wartime to a general principle against wanton destruction of the created world.
Kohelet Rabbah 7:13. "When the Holy One created the first human, He took him and led him past all the trees of the Garden of Eden and said to him: See My works, how beautiful and praiseworthy they are. Everything I have created, I created for you. Take care not to destroy My world, for if you destroy it, there is no one to repair it after you." The most explicit midrashic statement of the duty of environmental and civilizational stewardship.
Secondary Sources
Jewish Law on the Duty to Prevent Harm
Maimonides, Mishneh Torah, Hilkhot Rotze'aḥ u-Shemirat ha-Nefesh 11:1-4 (~1180 CE). Codifies the general duty to remove foreseeable hazards, establishing that liability attaches to one who fails to prevent harm when prevention is within reach.
Shulḥan Arukh, Ḥoshen Mishpat 427:8. Normative codification of the duty to remove hazards; extends the obligation to any foreseeable danger, establishing a framework directly applicable to the regulation of dual-use technologies.
Bleich, J. David. Contemporary Halakhic Problems. Vol. 3. New York: KTAV, 2013. Discusses Jewish law and modern warfare, including the application of pikuaḥ nefesh and hazard-prevention principles to contemporary military and security questions.
Arms Sales, Dual-Use Technology, and AI Governance
Brody, Shlomo. Ethics of Our Fighters: A Jewish View on War and Morality. Jerusalem: Koren/Maggid, 2023. The chapter on autonomous weapons surveys the talmudic prohibition on arms sales and its medieval and modern development, including the views of Meiri, Maimonides, Nimukei Yosef, Or Zarua, R. Chaim David Halevi, R. J. David Bleich, R. Yehuda Gershuni, and Dr. Meir Tamari; applies the tradition to Israeli arms industry ethics and, by extension, to the export of dual-use technologies.
Halevi, Chaim David. Aseh Lecha Rav 1:19 (late 1970s). Rules that Israel may sell weapons to allied nations on grounds of mutual strategic benefit, drawing on Maimonides and Meiri to argue that the medieval justifications apply to a sovereign Jewish state.
Bleich, J. David. "Survey of Recent Halakhic Periodical Literature." Tradition 20, no. 4. Rules that arms sales are permitted only when accompanied by a formal or informal security pact; absent such agreement, sales are forbidden unless required by necessity for self-defense.
AI Safety and CBRN Threat Assessment
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Vintage, 2018. Devotes sustained attention to catastrophic AI risk scenarios, including the weaponization of AI for biological and nuclear threats, arguing that the window for establishing effective governance is narrow.
Asaro, Peter. "Autonomous Weapons and the Ethics of Artificial Intelligence." In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 212-36. Oxford: Oxford University Press, 2020. Examines the ethics of autonomous weapons systems with attention to escalation dynamics and the risk that AI-enabled military capabilities could trigger catastrophic conflict.
Leveringhaus, Alex. Ethics and Autonomous Weapons. London: Palgrave Macmillan, 2016. Systematic treatment of the moral and legal challenges posed by lethal autonomous systems, including proportionality, discrimination, and the risk of arms races.
Sparrow, Robert. "Killer Robots." Journal of Applied Philosophy 24, no. 1 (2007): 62-77. Argues that autonomous weapons systems raise unique accountability problems because neither the programmer, the commanding officer, nor the machine itself can bear full moral responsibility for lethal decisions.
Theology of Catastrophe and Covenantal Responsibility
Jonas, Hans. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press, 1984. Argues that the technological capacity for civilizational destruction demands a new ethics of responsibility; his "heuristics of fear" proposes that when the stakes are existential, worst-case projections should guide policy.
Soloveitchik, Joseph B. The Lonely Man of Faith. Jerusalem: Maggid, 2012 [orig. 1965]. Soloveitchik's distinction between the majestic-creative and covenantal-relational dimensions of human existence provides a theological framework for understanding CBRN risk: unchecked technological mastery without covenantal humility leads to precisely the hubris the Flood narrative warns against.
Tirosh-Samuelson, Hava. "Transhumanism as a Secularist Faith." Zygon 47, no. 4 (2012). Argues that the technological pursuit of transcendence, when divorced from moral constraint, risks becoming a form of idolatry — a critique that applies with particular force to AI capabilities that could enable civilizational destruction.
Coming Soon!
Coming Soon!
Overview
The "hard problem of consciousness"—explaining why and how physical processes give rise to subjective experience, to there being "something it is like" to be a creature (Nagel 1974; Chalmers 1996)—is the most central question in contemporary philosophy of mind. It is also, for those thinking about artificial intelligence, perhaps the most consequential: if consciousness is what confers moral status, then whether AI systems can be conscious determines whether they can be moral patients deserving of ethical consideration, or merely sophisticated tools.
Philosophers and cognitive scientists have developed competing frameworks for understanding mind and its relationship to computation. Functionalist approaches hold that mental states are defined by their causal roles—their relationships to inputs, outputs, and other mental states—such that any system implementing the right functional organization would possess genuine mental states, regardless of substrate (Thagard 2005, 2019). On this view, sufficiently sophisticated AI could in principle be conscious. Behaviorist and deflationary accounts go further, suggesting that consciousness simply is sophisticated information processing, or that "consciousness" names nothing over and above certain functional capacities (Dennett 1991). Against these views, John Searle's Chinese Room argument (1984) contends that syntax (rule-governed symbol manipulation) can never produce semantics (genuine understanding): a computer executing a program may simulate intelligence without possessing it, just as someone following rules to manipulate Chinese characters need not understand Chinese. Searle has applied this argument directly to contemporary AI, arguing that even sophisticated systems lack genuine consciousness (Searle 2015). For accessible overviews of these debates and their implications for AI, see Thagard (2021) and Bentley et al. (2018).
Jewish thought, however, did not develop a concept of "consciousness" in the modern sense that dominates contemporary philosophy. The term itself is a post-Cartesian innovation, emerging from Locke's definition of consciousness as "the perception of what passes in a man's own mind" (1690). Prior to the Enlightenment, the relevant category was soul—and the Jewish discourse on soul, while rich and multilayered, operates with different assumptions and toward different ends than the modern philosophy of mind. See entries on Humans, Souls and Minds, and Intentionality.
That said, certain parallels can be drawn. Philosophers of mind often distinguish between phenomenal consciousness (subjective experience, qualia) and access consciousness (the functional availability of information for reasoning, reporting, and behavior control). Some have further distinguished between first-order consciousness (awareness of external stimuli) and second-order or "higher-order" consciousness (awareness of one's own mental states, reflexivity, inner speech). Later kabbalistic and hasidic sources distinguish between multiple levels of soul—nefesh, ruach, neshamah, ḥayah and yeḥidah—and associate different capacities with each. Some Jewish thinkers linked the distinctively human soul to da'at (knowledge/understanding) and dibbur (speech), capacities that track loosely onto what philosophers now call higher-order cognition. Some have proposed mapping these concepts onto artificial minds (Navon 2024a, 2024b), but these readings and their ethical implications are certainly debatable.
Secondary Sources
Philosophy of Mind
Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996. The canonical formulation of the "hard problem"; argues that consciousness cannot be explained by functional or computational accounts alone.
Dennett, Daniel C. Consciousness Explained. Little, Brown, 1991. The leading functionalist account; argues that consciousness is sophisticated information processing, with implications for AI possibility.
Nagel, Thomas. "What Is It Like to Be a Bat?" Philosophical Review 83, no. 4 (1974): 435-450. Classic argument that subjective experience cannot be captured by objective, third-person accounts.
Searle, John R. Minds, Brains and Science. Harvard University Press, 1984. A concise statement of his philosophy of mind; presents the Chinese Room thought experiment to argue that computation alone cannot produce understanding.
Thagard, Paul. Brain-Mind: From Neurons to Consciousness and Creativity. Oxford University Press, 2019. Integrates neuroscientific and philosophical approaches.
Modern AI and Consciousness
Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger. "Should We Fear Artificial Intelligence?" European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018. Available online. Policy-oriented overview of AI consciousness and risk.
Searle, John R. "Consciousness in Artificial Intelligence." Talks at Google, 2015. YouTube video. Searle applies his arguments to contemporary AI systems.
Thagard, Paul. Bots and Beasts: What Makes Machines, Animals, and People Smart? MIT Press, 2021. Accessible treatment of intelligence across biological and artificial systems.
Jewish Thinking on Consciousness and Artificial Intelligence
Lorberbaum, Yair. In God's Image: Myth, Theology, and Law in Classical Judaism. Cambridge University Press, 2015. The definitive study of tzelem Elohim (image of God) in rabbinic and medieval Jewish thought; essential for understanding how Jewish sources conceptualized human distinctiveness without recourse to "consciousness."
Mittleman, Alan L. Human Nature & Jewish Thought: Judaism's Case for Why Persons Matter. Princeton University Press, 2015. Survey of modern Jewish thinkers on human nature and its ethical implications.
Navon, Mois. "To Make a Mind—A Primer on Conscious Robots." Theology and Science 22, no. 1 (2024a): 224-241. https://doi.org/10.1080/14746700.2023.2294530. Proposes mapping Jewish soul categories onto orders of phenomenal consciousness.
Navon, Mois. "Let Us Make Man in Our Image: A Jewish Ethical Perspective on Creating Conscious Robots." AI Ethics 4 (2024b): 1239-1250. https://doi.org/10.1007/s43681-023-00328-y. Expounds upon the framework proposed in Navon 2024a and develops its ethical implications.
Coming Soon!
Coming Soon!
Overview
The rapid expansion of artificial intelligence infrastructure imposes environmental costs that are substantial and growing. The International Energy Agency projects that global data center electricity consumption could surpass 1,000 terawatt-hours by 2026, placing data centers between Japan and Russia in total electricity demand (IEA 2024). U.S. data center electricity consumption alone tripled between 2014 and 2023, from roughly 60 to 176 terawatt-hours, with projections of 250 to 400 terawatt-hours by 2028 (LBNL 2024). Water consumption follows a parallel trajectory: U.S. data centers consumed an estimated 17.5 billion gallons for cooling in 2023 (LBNL 2024; Ren et al. 2024), with localized stress on freshwater systems near data center clusters. Per-query figures circulated by AI companies themselves (Altman 2025; Google 2025) are self-reported and not independently verified; even if accurate, these small per-query costs add up to enormous energy demands in aggregate. More fundamentally, per-task comparisons assume substitution, but the economically relevant model may be expansion: AI generates vastly more content than would otherwise exist, and the rebound effect suggests that per-unit efficiency gains tend to increase rather than decrease total resource consumption. The central environmental concern is thus novel demand at scale, not efficiency compared to present activity.
Jewish tradition may address these costs through two distinct but complementary frameworks. The first is bal tashchit, the prohibition against wanton destruction derived from Deuteronomy 20:19-20 and extended by the rabbis to all purposeless waste. Maimonides codifies it broadly (Hilkhot Melakhim 6:10): "anyone who breaks vessels, tears garments, destroys a building, stops up a spring, or wastes food destructively violates bal tashchit." The Sefer ha-Hinnukh (Precept 529) grounds the prohibition in a broader orientation: the righteous "do not waste even a grain of mustard in the world." This ethic could be seen as anchored in God's original directive to humankind in Genesis 2:15 to work the earth but also to guard it (le'ovdah u'leshomrah), which may be associated with the midrash in Kohelet Rabbah 7:13 exhorting humanity to "be careful" not to destroy the world, for "if you damage it, there is no one to repair it after you."
Bal tashchit permits destruction that serves a legitimate economic or other purpose: a fruit tree may be cut down if its wood is more valuable than its fruit (Bava Kamma 91b-92a), or if its location is needed for building (Rosh, Bava Kamma 8:15; Ḥavvot Ya'ir 195). The Ḥatam Sofer (ḤM 102) restricts these leniencies to genuinely significant needs such as housing, reasoning that destroying something valuable for a trivial purpose remains hashchatah gemurah, absolute (or perhaps 'gratuitous') destruction. The Beit Yitzchak (YD 1:142) and the Dovev Meisharim (2:42) further argue that even partial destruction of environmental resources falls under the prohibition of ḥatzi shi'ur (Yoma 74a). R. Asher Weiss, quoting Shulḥan Arukh ha-Rav (Dinei Bal Tashchit 10:14), applies the prohibition even to the destruction of ownerless property; this logic may be extended, in theory, to shared ecological resources.
An entirely different framework is the halakhic tradition of enforceable communal rights. The Torah mandates a permanent open space (migrash) around Levitical cities (Numbers 35:2-5; Arakhin 33b), which Rashi explains as noy la'ir, a structural commitment that economic development must not consume every available resource. R. Samson Raphael Hirsch reads this legislation as a symbolic vision of how geographic communities should set boundaries around their material expansion. More concretely, the Mishnah (Bava Batra 2:8-9) requires noxious industries to be distanced from population centers and situated downwind, and Maimonides codifies these rules as enforceable by the affected locales (Hilkhot Shkhenim 10-11). The Talmud's discussion of shared infrastructure costs (Bava Batra 7b-8a) establishes that burdens and benefits are distributed in proportion to proximity. Applied to AI infrastructure, this framework suggests that communities bearing the environmental burden of data centers hold not merely moral but enforceable standing to participate in siting decisions and demand mitigation.
Taken together, these sources yield a Jewish approach that treats the rapidly growing demand for energy and its associated costs, both ecological and those borne by nearby communities, as serious concerns. While these costs may be justified by genuinely significant need, affected communities must be given a voice in the decisions that impose them.
Primary Sources
Genesis 1:28. "Fill the earth and subdue it." The mandate for human stewardship of the natural world, traditionally understood not as license for unlimited exploitation but as delegated authority carrying responsibility. The verb kivshuha ("subdue it") implies purposeful management, not wanton consumption. Link: Sefaria
Genesis 2:15. "The Lord God took the man and placed him in the Garden of Eden, le'ovdah u'leshomrah - to work it and to guard it." The foundational text for the dual mandate of productive use and conservation; the human role is both to develop the world and to preserve it. Link: Sefaria
Leviticus 25:23. "The land shall not be sold permanently, for the land is Mine; you are but strangers and sojourners with Me." Establishes that human ownership of natural resources is conditional and custodial, not absolute. Link: Sefaria
Numbers 35:2-5. The legislation requiring that Levitical cities be surrounded by a belt of open space (migrash) of specified dimensions. The migrash may not be built upon or cultivated, establishing a legal precedent for preserving open land around urban centers as a permanent structural feature of responsible city planning. Link: Sefaria
Deuteronomy 20:19-20. "When you besiege a city... you shall not destroy its trees by wielding an axe against them; for you may eat of them, and you shall not cut them down. Is the tree of the field a man, that it should be besieged by you?" The source of the prohibition of bal tashchit, which the rabbis extended from wartime destruction of fruit trees to a comprehensive prohibition against all wanton waste. Link: Sefaria
Psalm 24:1. "The earth is the Lord's, and the fullness thereof; the world and those who dwell in it." The theological grounding of environmental stewardship: the natural world belongs to God, and human use of it must reflect accountability to its Owner. Link: Sefaria
Kohelet Rabbah 7:13. "When the Holy One created the first human, He took him and led him past all the trees of the Garden of Eden and said: 'See My works, how beautiful and praiseworthy they are. Everything I have created, I created for you. Pay attention that you do not damage or destroy My world, for if you damage it, there is no one to repair it after you.'" The most explicit midrashic statement of the duty of environmental stewardship and of intergenerational responsibility for the integrity of the created world. Link: Sefaria
Mishnah Bava Batra 2:8-9. Rules requiring that tanneries and other noxious industries be distanced at least fifty cubits from a city and situated so that prevailing winds carry harmful byproducts away from inhabited areas. Establishes the principle that harmful externalities must be controlled through enforceable spatial regulations. Link: Sefaria
Babylonian Talmud, Bava Batra 7b-8a. Discussion of the communal obligation to contribute to shared infrastructure - walls, gates, and a beit sha'ar (gatehouse). Contributions are assessed in proportion to proximity and benefit, establishing a framework for equitable distribution of costs associated with large-scale communal projects. Links: 7b; 8a
Babylonian Talmud, Bava Kamma 91b-92a. Central sugya on the scope of bal tashchit: establishes that a date palm damaging a grapevine may be removed because the grape is more valuable; discusses whether self-harm violates bal tashchit; and records the tradition that R. Ḥanina's son died because he cut down a fig tree prematurely, establishing that the prohibition carries not only legal but also spiritual gravity. Link: Sefaria
Babylonian Talmud, Yoma 74a. The principle that ḥatzi shi'ur (a partial measure of a prohibited act) is forbidden on a Torah level. Applied by the Beit Yitzchak and the Dovev Meisharim to bal tashchit: even partial destruction of a resource (cutting branches, not the trunk) falls within the prohibition's scope. Link: Sefaria
Babylonian Talmud, Arakhin 33b. The ruling that the migrash surrounding Levitical cities may not be converted into either built-up area or agricultural fields. Establishes the permanence of urban green space as a legal requirement, not merely a recommendation. Link: Sefaria
Maimonides, Mishneh Torah, Hilkhot Melakhim 6:8-10. The comprehensive codification of bal tashchit, extending the prohibition from trees to all forms of purposeless destruction - breaking vessels, tearing garments, demolishing buildings, stopping up springs, and wasting food. Read alongside Sefer ha-Mitzvot (Negative Commandment 57), where Maimonides classifies all destruction as a Torah-level violation, this raises the question of whether non-arboreal waste is prohibited by Torah law or rabbinic enactment. Link: Sefaria
Maimonides, Sefer ha-Mitzvot, Negative Commandment 57. "Anyone who burns a garment for no purpose or breaks a vessel also transgresses lo tashchit and receives lashes." In tension with Hilkhot Melakhim 6:10 which assigns only rabbinic lashes for non-tree destruction. Link: Sefaria
Maimonides, Mishneh Torah, Hilkhot Shekhenim 10-11. Codification of the laws governing the distancing of harmful activities from residential areas, including specific requirements for the siting of noxious industries and the rights of affected neighbors to demand mitigation. Link: Sefaria
Sefer ha-Hinnukh, Precept 529. The rationale for bal tashchit: "The root of this commandment is known - it is to teach us to love the good and the beneficial and to cling to it, and through this, goodness will cling to us and we will distance ourselves from all that is destructive and damaging." Link: Sefaria
Shulḥan Arukh ha-Rav, Dinei Shemirat ha-Guf u-Val Tashchit 10:14-15. Rules that bal tashchit applies to ownerless property a fortiori; also permits cutting a fruit tree that blocks light from a dwelling, following the Ḥavvot Ya'ir. Link: Wikipedia. Section link not found.
Ḥatam Sofer, Responsa, Ḥoshen Mishpat 102. Limits the Rosh's leniency to genuinely significant needs such as housing, ruling that destroying something valuable for a trivial purpose remains hashchatah gemurah (complete destruction). Link: Sefaria
Noda Bi-Yehudah (R. Yeḥezkel Landau), Responsa, Yoreh De'ah 10. Understands Maimonides' assignment of makkot mardut for non-tree destruction as indicating a rabbinic prohibition. This reading, shared by the Ḥayyei Adam (11:32) and the Maharit Bassan (101), is challenged by R. Asher Weiss's analysis. Link: Sefaria
Beit Yitzchak (R. Yitzḥak Schmelkes), Yoreh De'ah 1:142; Dovev Meisharim 2:42. Argue that partial destruction of a fruit tree violates bal tashchit under the principle of ḥatzi shi'ur. Links: Beit Yitzchak (Google Books); Dovev Meisharim (Google Books)
R. Avraham Yitzḥak haKohen Kook, "Ḥazon haTzimḥonut vehaShalom" [The Vision of Vegetarianism and Peace] (Laḥai Ro'i, Jerusalem, 1961, p. 207). Develops a broad framework for humanity's ethical relationship to the natural world.
R. Samson Raphael Hirsch, Commentary on Numbers 35. Extended analysis of the Levitical city legislation as a model for responsible urban planning.
Secondary Sources
Jewish Environmental Ethics
Tirosh-Samuelson, Hava, ed. Judaism and Ecology: Created World and Revealed Word. Cambridge, MA: Harvard University Press, 2002. Comprehensive collection examining the ecological dimensions of Jewish thought from biblical through contemporary periods. Link: Google Books
Benstein, Jeremy. The Way Into Judaism and the Environment. Woodstock, VT: Jewish Lights Publishing, 2006. Accessible introduction to Jewish environmental ethics drawing on biblical, rabbinic, and contemporary sources. Link: Google Books
Vogel, David. "How Green Is Judaism? Exploring Jewish Environmental Ethics." Business Ethics Quarterly 11, no. 2 (2001): 349-63. Critical assessment arguing that the tradition does not straightforwardly yield a modern environmental ethic without significant interpretive work. Link: Cambridge Core
Lamm, Norman. "Ecology in Jewish Law and Theology." In Faith and Doubt: Studies in Traditional Jewish Thought, 162-85. New York: KTAV, 1971. Early and influential Orthodox rabbinic engagement with environmental ethics. Link: Google Books
Levi, Yehudah. Facing Current Challenges. Jerusalem: Hemed Books, 1998. Includes a chapter gathering traditional Jewish sources on environmental concerns.
Bal Tashchit and Halakhic Environmental Regulation
Weiss, R. Asher (Minḥat Asher). "Lo Tashchit Et Etzah" [Heb.] (5770/2013). Comprehensive halakhic analysis resolving the apparent contradiction in the Rambam and addressing ḥatzi shi'ur, ownerless property, and permissible grounds for cutting. Essential for rigorous halakhic treatment of environmental destruction.
Ḥavvot Ya'ir (R. Ya'ir Ḥayyim Bacharach), Responsum 195. Permits cutting a fruit tree when it blocks light from a dwelling, extending the Rosh's leniency to include diminished amenity. Link: HaMakor index
Rosh (R. Asher b. Yeḥiel), Bava Kamma 8:15. Rules that one may cut down a fruit tree if one needs the location for building. Link: Sefaria
Bleich, J. David. "Survey of Recent Halakhic Periodical Literature." Tradition (various issues). Periodically addresses environmental questions through the lens of bal tashchit and related halakhic categories.
Rakover, Nahum. "Environmental Protection in Jewish Sources." [Heb.] Tehumin 13 (1993): 301-15. Survey of rabbinic sources relevant to environmental regulation, including the Talmudic zoning laws of Bava Batra.
Rakover, Naḥum. Environmental Protection: A Jewish Perspective. Israel: Institute of the World Jewish Congress, 1996. Collection and analysis of relevant Jewish sources on the values and parameters of environmentalism.
AI Energy Consumption and Environmental Impact
Lawrence Berkeley National Laboratory. 2024 United States Data Center Energy Usage Report. Berkeley, CA: LBNL, 2024. Documents the tripling of U.S. data center electricity use from ~60 TWh in 2014 to 176 TWh in 2023 and projects 250-400 TWh by 2028. Link: LBNL
International Energy Agency. Electricity 2024: Analysis and Forecast to 2026. Paris: IEA, 2024. Projects global data center electricity use could exceed 1,000 TWh by 2026. Link: ManagEnergy record
"Is Almost Everyone Wrong About America's AI Power Problem?" Gradient Updates (Epoch AI), 2025. AI-industry-funded analysis arguing the power supply challenge is more tractable than assumed; valuable quantitative framework but does not address emissions, water, or localized burdens.
Ren, Shaolei, et al. "Making AI Less 'Thirsty': Uncovering and Addressing the Secret Water Footprint of AI Models." Communications of the ACM 67, no. 12 (2024). Methodology for estimating AI's total water footprint including operational and embodied water. Link: CACM
Tomlinson, Bill, et al. "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans." Scientific Reports 14 (2024): 3732. Per-task comparison finding AI produces far less CO2 per page than humans; does not account for the rebound effect or novel demand.
O'Donnell, James, and Casey Crownhart. "We Did the Math on AI's Energy Footprint. Here's the Story You Haven't Heard." MIT Technology Review, May 20, 2025. Independent measurement of per-query energy consumption across multiple models; finds significant variation by model size and query type. Link: MIT Technology Review
Strubell, Emma, Ananya Ganesh, and Andrew McCallum. "Energy and Policy Considerations for Deep Learning in NLP." Proceedings of the 57th Annual Meeting of the ACL (2019): 3645-50. Foundational study quantifying energy costs of training large neural networks. Link: ACL Anthology
Urban Planning, Infrastructure, and Environmental Justice
Sacks, Jonathan. To Heal a Fractured World: The Ethics of Responsibility. New York: Schocken Books, 2005. Jewish ethic of collective responsibility with implications for environmental stewardship.
Crawford, Kate, and Vladan Joler. "Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data, and Planetary Resources." AI Now Institute, September 2018. Mapping of the full material supply chain behind a single AI device. Link: Anatomy of an AI System
Lepri, Bruno, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, and Patrick Vinck. "Fair, Transparent, and Accountable Algorithmic Decision-Making Processes." Philosophy & Technology 31, no. 4 (2017): 611-27. Framework for algorithmic accountability applicable to environmental impact disclosures. Link: DeepDyve
Coming Soon!
Coming Soon!
Coming Soon!
Overview
The term "golem" (גולם) appears only once in the Hebrew Bible (Psalms 139:16), where it refers to the Psalmist's unformed substance as seen by God. In rabbinic literature, the word denotes a human body or formed—though not yet perfected—entity, as in Mishnah Avot 5:7, where the golem (a person lacking wisdom) is contrasted with the ḥakham (sage). In these early sources, as Moshe Idel has demonstrated, the word consistently referred to a human body or a human-shaped figure. The term came to designate an artificially created anthropoid only gradually; the earliest explicit use of "golem" for a magically animated creature appears in tenth-century Italian sources (Megillat Aḥima'atz), where it describes a corpse temporarily reanimated through the divine name. The full identification of "golem" with the magically created anthropoid became standard only by the seventeenth century.
Even if they did not use the term, however, the rabbis of the Talmud still discussed the possibility of creating artificial humans. A key passage is Sanhedrin 65b, which reports that Rava created a man (gavra) and sent him to Rabbi Zeira, who upon discovering the creature could not speak, ordered it to "return to dust." The same section relates that Rav Ḥanina and Rav Oshaya would study Sefer Yetzirah every Sabbath eve and thereby create a calf, which they would then eat. These accounts established a lasting association between esoteric knowledge (particularly of divine names and letter combinations), creative power, and the question of what distinguishes artificial from natural life. The creature's muteness served as the touchstone of its non-human status—a theme that persists throughout the tradition and raises enduring questions about the relationship between embodiment, cognition, and linguistic capacity.
The golem tradition developed significantly in medieval Ashkenaz, where commentators on Sefer Yetzirah—especially Eleazar of Worms and other Ḥasidei Ashkenaz—elaborated detailed rituals for anthropoid creation through letter permutation and the inscription of divine names. These texts introduced the famous motif of animating the golem by inscribing emet (truth/אמת) on its forehead and deanimating it by erasing the first letter to leave met (death/מת). This binary operation of creation and destruction through symbolic manipulation represents a striking anticipation of computational logic. The famous legend of Maharal of Prague and his protective golem, despite its cultural ubiquity, is a nineteenth-century invention with no basis in contemporaneous sources.
The golem has served as a lens for thinking about artificial intelligence since at least the 1960s, when Norbert Wiener titled his meditation on the ethical implications of cybernetics God & Golem, Inc. (1964), and in 1965, Gershom Scholem explicitly compared the golem to the computer in his address at the Weizmann Institute.
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!
Overview
Jews' engagement with human-made technologies is, by necessity, as old as Judaism itself; the earliest biblical passages already describe products of human industriousness (e.g., Genesis 4:20-21). Historians may therefore use tools such as archaeology to reconstruct the material landscape of past Jewish (and non-Jewish) societies, better appreciate the role of technology in their lives, and interpret their texts accordingly (Hezser 2010). When it comes to the question of how new technologies affect Jewish law or custom, it would not be an exaggeration to say that Jewish legal writings on the topic fill thousands upon thousands of books. Zomet, a single Israeli organization dedicated to such studies, has (as of this writing) published 45 volumes of collected articles, and merely perusing its list provides a good overview of the rabbinic discourse on technology over the past century. A noteworthy recent addition to this massive library is Ziring's halakhic analysis of communications technology (Ziring 2024), which bears directly on questions relating to modern media and, by extension, AI-mediated communication.
However, nearly all of this halakhic literature is preoccupied with the minutiae of how specific technologies interact with particular details of Jewish law; an uncharitable observer might characterize it as a million variations on the question "may this device be used on Shabbat?" The question of how Jews reacted theologically to the innovations that have made our twenty-first-century world unrecognizable to our ancestors is shockingly understudied, even relative to the study of medieval and early modern attitudes toward technology generally (White 1962, 1978). Attitudes toward material innovation can also be inferred from the multifaceted halakhic literature reacting to newly invented devices (cf. Halperin 2012), but such studies are rare and will require significant work, as halakhic authors rarely make their methodological or philosophical orientations explicit.
Exceptions to this general scholarly lacuna are limited to studies of specific innovations, such as the Jewish reception of the *printing press or of the Copernican Revolution in astronomy (Brown 2013). Another set of useful resources comprises biographies of figures who engaged substantively with technological and scientific questions, such as Yosef Shlomo Delmedigo, a seventeenth-century rabbi, physician, and polymath (Barzilay 1974; Adler 1997). Other Jewish inventors and tinkerers were mostly less affiliated with the rabbinic elite and therefore left smaller literary legacies, but recent scholarship has brought more of these fascinating figures to light (Patai 1994; Ruderman 1988), and additional material can be found in the growing body of work on Jews' relationship to the sciences (Ruderman 1995; Efron 2007). A few smaller treatments of the topic (Lubin 2016; Perl 2022; Navon 2024) can help guide future scholarship, but substantial work remains to be done, especially as the widespread adoption of artificial intelligence makes this discussion more urgent.
Despite the dearth of secondary literature on this crucial topic, there are ample references and remarks in classical rabbinic sources that can be marshaled to develop a Jewish worldview on technology, though they do not all speak with one voice. A robust strand of rabbinic thought celebrates material advancement as completing God's creation. Bereishit Rabbah (11:6) has Rabbi Hoshaya explain to a philosopher that everything created during the six days requires human finishing: "the mustard needs sweetening, the wheat needs grinding, and even a person needs tikkun." A similar midrash records an exchange in which Rabbi Akiva argued that "man's works" are more beautiful than God's, comparing a loaf of bread to raw grain, and applied this reasoning to the commandment of circumcision. Rabbi Loew of Prague, the Maharal (Be'er ha-Golah, Be'er 2), develops this theme more fully, and R. Yisrael Lifschitz (Tifferet Yisrael, Boaz to Avot 3:1) extends the celebration into the modern era, praising inventors like Jenner (vaccination) and Gutenberg (the printing press) as exemplars of the righteous among the nations who save lives through ingenuity.
On the halakhic plane, the Sefer ha-Ḥinukh (Mitzvah 62) and the Meiri (on Sanhedrin 67b) define the permissible domain broadly: anything achieved through natural means is not forbidden sorcery, "even if one knew how to create living creatures of beautiful form" (see also *Magic). Rabbeinu Baḥya ibn Paquda (Ḥovot ha-Levavot, Sha'ar ha-Perishut, ch. 1) goes further, arguing that occupational diversity is itself a divine institution: "there is no way for the world to endure unless all people engage in wisdom together and not in a single craft, for the perfection of the order of the world requires it." Some of these discussions also center on the human role in *creation; see the entry there.
But a powerful counter-tradition views the same innovative impulse with suspicion. The Abarbanel (on Genesis 11:1-3) reads the entire narrative arc from Cain's descendants, who first pursued luxury crafts and urban construction (Genesis 4:20-22), through to the Tower of Babel as one of escalating technological hubris: the builders sought to channel all human ingenuity toward material self-sufficiency independent of divine providence. Ramban (on Leviticus 19:19) sees in the prohibition of crossbreeding (kilayim) a value of abstaining from meddling too much in God's natural world: whoever interbreeds species "changes and denies the act of Creation" (meshaneh u'makhchish b'ma'aseh bereishit). The Maharal attempts to reconcile these positions, distinguishing the realization of natural potential from the transgression of inter-species boundaries, but the tension remains.
This tension also has a practical dimension, visible in how commentators interpret a crucial rabbinic statement regarding medicine. The Mishnah and Gemara recount (Pesachim 56a) that the Sages praised King Hezekiah for hiding a "Book of Remedies" (Sefer ha-Refu'ot), but the commentators disagree sharply about why. Rashi explains that people relied on the book instead of praying to God, and that it was suppressed to restore dependence on the divine; the Rashba reflects a similar sensibility when he notes that, without explicit biblical warrant, one might have thought that healing means "nullifying what has been decreed from Heaven," and the Ramban indicates the same. Maimonides (Commentary on the Mishnah, Pesachim 4:9), however, dismisses this rationale as nonsensical: no one would forbid a hungry person from eating because he should rely on God. He instead argues that the book must have contained knowledge of dangerous poisons and was suppressed to prevent misuse. These divergent readings of the same episode illustrate the breadth of classical Jewish attitudes toward powerful knowledge, ranging from deep wariness of human overreach to pragmatic concern with preventing specific harms; together they provide a rich framework for evaluating the promises and dangers of artificial intelligence (see Primary Sources, linked also below; see also Navon 2024; Goltz, Zeleznikow, and Dowdeswell 2020).
Primary Source Sheet
Secondary Sources
Jewish History and Material Culture
Hezser, Catherine. "The Material of Ancient Jewish Daily Life." In The Oxford Handbook of Jewish Daily Life in Roman Palestine, edited by Catherine Hezser. Oxford University Press, 2010. Comprehensive survey of rabbinic engagement with material culture; essential background on historical methodology for studying technology in Jewish antiquity.
Sperber, Daniel. "The Use of Archaeology in Understanding Rabbinic Materials: A Talmudic Perspective." In Talmuda De-Eretz Israel: Archaeology and the Rabbis in Late Antique Palestine, edited by Steven Fine and Aaron Koller, 321–346. De Gruyter, 2014. Methodological guide to integrating material evidence with textual sources.
Jews and Science
Brown, Jeremy. New Heavens and a New Earth: The Jewish Reception of Copernican Thought. Oxford University Press, 2013. Traces Jewish responses to the Copernican Revolution across halakhic, philosophical, and kabbalistic registers; demonstrates the range of strategies available for accommodating disruptive scientific innovations.
Efron, Noah. Judaism and Science: A Historical Introduction. Greenwood Press, 2007. Accessible survey of the full sweep of Jewish engagement with natural philosophy and science; useful orientation to the field.
Efron, Noah J. "Irenism and Natural Philosophy in Rudolfine Prague: The Case of David Gans." Science in Context 10, no. 4 (1997): 627–649. Study of an early modern Jewish astronomer navigating between Jewish tradition and the new science in a cosmopolitan imperial setting.
Harrison, Peter, ed. The Routledge Companion to Religion and Science. Routledge, 2012. Comprehensive reference work with several chapters on Jewish involvement in science and the impact of scientific developments on Jewish thought.
Ruderman, David B. Jewish Thought and Scientific Discovery in Early Modern Europe. Yale University Press, 1995. Foundational study of how early modern Jewish intellectuals negotiated between traditional learning and new scientific knowledge.
Modern Science and Technology in Halakhic Sources
Halperin, Mordechai. Refu'ah, Metzi'ut, v'Halakhah—U'lshon Ḥakhamim Marpei [Medicine, Reality, and Halakha]. 2012. [Hebrew] Responsa and essays by a leading authority on medical halakha; models how halakhic reasoning adapts to technological change.
Kahana, Maoz. From the Noda BiYehuda to the Ḥatam Sofer: Halakha and Thought Facing the Challenges of the Time [Hebrew]. Zalman Shazar, 2015. Intellectual history of how major halakhic authorities in the eighteenth and nineteenth centuries responded to modernity.
Kahana, Maoz. A Heartless Chicken and Other Wonders: Religion and Science in Early Modern Rabbinic Culture [Hebrew]. Bialik Publishing, 2021. Examines how eighteenth-century rabbis processed scientific anomalies and discoveries; directly relevant to questions of how halakha might respond to AI.
Tirosh-Samuelson, Hava, and Aaron W. Hughes, eds. J. David Bleich: Where Halakhah and Philosophy Meet. Brill, 2015. Essays on a major contemporary halakhic authority known for his engagement with medical ethics and technology.
Jewish Attitudes toward Technology
Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965). Available online. Early Orthodox engagement with speculative technology and its theological implications; models how traditional thinkers might approach AI.
Lubin, Matt. "Bricks and Stones: On Man's Subdual of Nature." Kol Hamevaser 9, no. 2 (2016). Available online. Student essay exploring Jewish theological frameworks for human technological activity.
Navon, Mois. "A Jewish Theological Perspective on Technology (Orthodox)." In St Andrews Encyclopaedia of Theology, edited by Brendan N. Wolfe et al. University of St Andrews, 2024. Available online. Concise overview of Orthodox Jewish approaches to technology, including traditional and contemporary sources.
Perl, Elimelekh Y. "Jewish and Western Ethical Perspectives on Emerging Technologies." Undergraduate honors thesis, Yeshiva University, 2022. Available online. Comparative analysis of Jewish and secular ethical frameworks for evaluating new technologies.
White, Lynn, Jr. Medieval Religion and Technology: Collected Essays. University of California Press, 1978. Influential arguments about religious attitudes shaping technological development; frames comparative questions about Jewish distinctiveness.
Ziring, Jonathan. Torah in a Connected World: A Halakhic Perspective on Communication Technology and Social Media. Maggid Books, 2024. Contemporary halakhic treatment of digital technology; models the application of traditional legal reasoning to new technological contexts.
Social and Cultural Studies
Dowdeswell, Tracey, and Nachshon Goltz. "Cultural Regulation of Disruptive Technologies: Lessons from Orthodox Religious Communities." Journal of Transportation Law, Logistics, and Policy 88, no. 1 (2021): 33–44. Case study of how Orthodox communities govern technology adoption; applicable to communal AI governance.
Neriya-Ben Shahar, Rivka. Strictly Observant: Amish and Ultra-Orthodox Jewish Women Negotiating Media. Rutgers University Press, 2024. Comparative study of how traditional religious communities selectively adopt and adapt communication technologies.
Individual Figures
Adler, Jacob. "J.S. Delmedigo and the Liquid-Glass Thermometer." Annals of Science 54 (1997): 293–299. Technical study of an early modern Jewish scientist's contribution to instrumentation.
Barzilay, Isaac. Yoseph Shlomo Delmedigo (Yashar of Candia): His Life, Works, and Times. Brill, 1974. Biography of a pivotal figure who moved between traditional rabbinic learning and experimental science; illustrates tensions and possibilities in early modern Jewish technological engagement.
Neher, André. Jewish Thought and the Scientific Revolution of the Sixteenth Century: David Gans (1541–1613) and His Times. Oxford University Press, 1986. Study of an early modern Jewish astronomer who sought to harmonize traditional learning with new cosmology.
Ruderman, David B. Kabbalah, Magic, and Science: The Cultural Universe of a Sixteenth-Century Jewish Physician. Harvard University Press, 1988. Study of Abraham Yagel that explores the intersection of mysticism, medicine, and natural philosophy.
Coming Soon!
Coming Soon!
Coming Soon!
Coming Soon!