Welcome to the Handbook of Jewish AI Ethics. These research guides provide in-depth explorations of key topics at the intersection of artificial intelligence and Jewish thought. Each entry serves as both a mini encyclopedia article and an annotated bibliography. Each entry is also associated with a Sefaria Source Sheet of primary sources that may be relevant to the question. Together these should offer context, analysis, and resources for further study. As always, be sure to contact us with any feedback and/or additional resources you'd like to see here.
Click on any topic below to read its full entry, or use the search bar (top right) to find specific terms across all entries.
Overview
Sheliḥut (agency) is the halakhic framework by which one person may act on behalf of another. The principle, phrased by the Talmud as "a person's agent is like himself" (שלוחו של אדם כמותו, sheluḥo shel adam ke-moto; Berakhot 34b; Kiddushin 41b), allows legally binding actions performed by a proxy (shaliaḥ) to be attributed to the principal (meshaleaḥ). This framework governs commercial transactions, marriage contracts, and even some, but not all, ritual obligations.
The matter of sheliḥut raises the question of whether autonomous or semi-autonomous nonhuman systems can serve as halakhic agents, and whether their actions can be meaningfully attributed to human principals. When it comes to ritual fulfillment of commandments, at least, application to AI is constrained by a fundamental requirement: only one who is bar ḥiyuva (subject to legal obligation) can serve as an agent. Since an AI is presumably not commanded to observe mitzvot, under classical halakhic reasoning "objects can never be proxies" (Kalman 2024).
However, it is possible that this limitation applies only to ritual commandments; indeed, the Talmud suggests some equivalence, or at least a comparison, between a person's agent and their courtyard, which can legally acquire property for its owner within its domain (Bava Metzia 10b-11b, Sefer ha-Mikneh 15:4-8). Medieval and modern commentators have discussed to what extent, if any, the rabbis extend the legal principle of sheliḥut to this particular inanimate object, viz., a person's real property (Tosafot Rosh and Rashba to Kiddushin 42a). While minors and the mentally incompetent are precluded from serving as valid agents due to their insufficient da'at (knowledge or intent), the terminology of sheliḥut used in the context of property indicates some flexibility for applications that are sufficiently dissimilar from human examples (cf. Nimukei Yosef and Pnei Yehoshua to Bava Metzia 11a).
Whether or not any vestige of the halakhic concept of sheliḥut applies to artificially constructed agents, the extensive traditional literature on this topic may prove useful for developing theories of AI liability and similar issues. For example, the Talmud (Gittin 29a-b) discusses the case of a principal (specifically, a husband seeking to divorce his wife) who dies before the agent has discharged his duty, and the ensuing discourse shows how halakhic authorities have considered to what extent the principal must remain continuously 'interested' (even subconsciously) in the agency of his proxy, and how completely authority may be transferred from one person to another (Klein 2024).
Primary Sources
- Babylonian Talmud: Berakhot 34b; Gittin 29a-b; Kiddushin 41b; Bava Metzia 10a-11b.
- Ketzot HaḤoshen, simanim 182 and 188; Sefer ha-Mikneh 15:4-8. Key discussions of the nature of agency and the authority transferred to agents; they analyze whether the agent acts as an extension of the principal or with independently delegated power.
Secondary Sources
- Klein, Dov. "Mingnon HaSheliḥut." [Hebrew] Yarhon Ha-Otzar 105 (2024): 449-464. Extensive treatment of the conceptual underpinnings of sheliḥut through rabbinic literature; briefly but directly addresses whether AI can fit within these frameworks.
- Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence (2024): 69-87. Argues that sheliḥut, animal-liability, and grama frameworks appear applicable to AI but remain underdetermined; cautions against premature systematization.
Overview
AI systems now make or inform consequential decisions in hiring, lending, criminal sentencing, healthcare, housing, and education. When these systems encode historical prejudices, having been "trained on" or having "learned from" data that reflects centuries of discrimination, they risk entrenching these injustices and perpetuating them behind a veneer of objectivity. For example, a hiring algorithm trained on past decisions learns to replicate the biases against hiring certain otherwise qualified individuals merely because they belong to a certain social class or ethnic group.
The core problem is that these systems treat individuals primarily as members of statistical reference classes, which is quite literally a matter of prejudice: it pre-judges the worth or fit of an individual instead of assessing their true merits or conduct. A person seeking a loan, a job, or parole is evaluated not as an individual but as an instance of a pattern derived from historical data. If that history includes redlining, discriminatory hiring, or biased policing, the algorithm learns to perpetuate those patterns. The system appears neutral—it is "just math"—but it automates the prejudices of the past (O'Neil 2016, Christian 2020).
Jewish tradition offers substantial resources for thinking through these problems. The Torah repeatedly commands that judges be impartial, and warns against favoring or disfavoring litigants based on wealth, status, or social position. Before turning to these procedural safeguards, it is worth addressing a more fundamental question: what does Jewish tradition say about the equal worth of all human beings?
Like many questions of traditional Jewish thought, the answer is not so straightforward. Judaism has long grappled with how to understand the distinct character of its own nationhood while recognizing the individual worth of every person; the particularist aspect of Jewish thought—its emphasis on the special covenant between God and Israel to the exclusion of other nations—can be said to be among its most defining features. Nevertheless, even the most ancient and classical Jewish sources contain strident affirmations of universal human dignity that cut against any ideology of racial or ethnic hierarchy. Genesis 1:27 teaches that all human beings are descended from Adam, who was created in God's image. In a remarkably anti-racist passage, the Mishnah (Sanhedrin 4:5) draws out the implications of this teaching: Adam was created alone "for the sake of peace among people, so that no one could say to another, 'My ancestor was greater than yours.'" This teaching directly challenges the logic of prejudice, the assumption that a person's lineage or ethnicity determines their worth.
The prophetic tradition reinforces this vision of human unity. Malachi 2:10 asks: "Have we not all one Father? Did not one God create us? Why do we deal treacherously with one another?" Amos 9:7 goes further, claiming that God's providential care extends to other nations as well: "Are you not like the Ethiopians to Me, O children of Israel? says the Lord. Did I not bring Israel up from the land of Egypt, and the Philistines from Caphtor, and Aram from Kir?" and Isaiah (56:3-7) promises that "the child of the foreigner should not say, God has separated me from His nation… [rather] my house shall be called a house of prayer for all nations". To be sure, other strands within Jewish scripture and classical literature may reflect different views, but the weight of the legal and ethical tradition, and certainly how it is understood in contemporary Judaism, emphasizes the dignity owed to every human being and the avoidance of prejudice, bias, and discrimination (Novak 1983, Hughes 2014).
Halakha, or Jewish law, includes specific procedural safeguards to protect against bias in judgment. Leviticus 19:15 commands: "You shall not render an unfair decision; do not favor the poor or show deference to the rich; judge your kinsman fairly." This principle was applied concretely in the Talmud (Shevuot 31a), which rules that if a poor person sues a rich person, the wealthy litigant cannot appear in court dressed more impressively than the poor one. Either the wealthy party must enable the poor one to dress comparably, or must dress down, "lest the poor litigant be weakened by his manifest inferiority in dress, or the judge be affected by the imbalance." The concern here is not merely with explicit bias but with the subtle ways that visible markers of status can distort judgment, even unconsciously.
In that same talmudic discussion we find a citation of the rabbinic principle of dan lekaf zekhut (judging favorably; Pirkei Avot 1:6), which Rashi interprets as assuming good intentions until proven otherwise. This principle presupposes that each person stands before judgment as an individual, capable of surprising us and deserving the benefit of the doubt. Predictive algorithms built on machine learning can be imagined as operating on precisely the opposite logic: they assign risk scores based on statistical correlations, presuming a person's disposition toward a particular outcome from possibly arbitrary factors, much like the unconscious bias that concerned the talmudic sages. Such guidelines may therefore be applied to certain algorithmic programs by requiring that those programs explicitly avoid characteristics that may be predictive but reflect unacceptable biases, such as a person's skin tone or style of clothing.
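The guideline suggested above, requiring programs to exclude predictive but ethically unacceptable characteristics, corresponds to what the machine-learning fairness literature calls "fairness through unawareness." A minimal sketch follows; the field names are hypothetical, chosen only to mirror the examples in this paragraph:

```python
# Hypothetical applicant records; field names are illustrative only.
applicants = [
    {"years_experience": 5, "test_score": 88,
     "skin_tone": "dark", "clothing_style": "casual"},
    {"years_experience": 2, "test_score": 91,
     "skin_tone": "light", "clothing_style": "formal"},
]

# Characteristics flagged as predictive but reflecting unacceptable
# biases; stripped before any scoring model sees the data.
PROHIBITED = {"skin_tone", "clothing_style"}

def redact(record: dict) -> dict:
    """Drop prohibited attributes so a downstream scoring model
    cannot condition on them directly."""
    return {k: v for k, v in record.items() if k not in PROHIBITED}

screened = [redact(r) for r in applicants]
print(screened[0])  # {'years_experience': 5, 'test_score': 88}
```

As the fairness literature cautions (see Barocas, Hardt, and Narayanan in the bibliography), such redaction is often insufficient on its own, since remaining variables can act as proxies for the removed ones; that limitation parallels the deeper problem of reference-class judgment raised in the next paragraph.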
However, there may be an even deeper problem with relying upon machine learning for making consequential decisions about a person's fate. A fundamental issue is that these systems, by their very nature, treat subjects as members of reference classes with specific mathematical weights and judge them accordingly, instead of as individuals with infinite worth. The Talmudic solution of controlling what information reaches the judge thus cannot address a system whose entire method is to classify people by group characteristics and predict their behavior based on the historical patterns of that group.
On the other hand, one may argue that a truly correct algorithmic prediction, if one were to exist, would be perfectly impartial. All of the factors it considers predictive would indeed be so, since it would not be hindered by human biases. Even in this case, however, it is worth recognizing that "strict justice" is not always what is called for; the Torah expects people not only to follow the letter of the law but also to address past injustices and current inequalities in appropriate ways, as several talmudic and rabbinic texts demonstrate (cf. Bava Metzia 83a).
The experience of antisemitism in AI systems (see Anti-Semitism and AI) gives Jewish communities a particular stake in these questions. Testing of major language models has found that they produce harmful outputs targeting Jews at elevated rates, likely because training data drawn from the internet reflects centuries of embedded antisemitic tropes. This experience should inform Jewish engagement with bias and discrimination more broadly: the same dynamics that encode antisemitism also encode racism, sexism, and other forms of prejudice. A Jewish response cannot focus only on harms to Jews while ignoring harms to others who are similarly situated.
Primary Sources
- Genesis 1:27. The creation of humanity b'tzelem Elohim (in the divine image), establishing the foundational equality and dignity of all persons regardless of social category.
- Exodus 23:3. "Do not favor the poor in his cause."
- Exodus 23:6. "Do not pervert the judgment of the poor in his cause."
- Leviticus 19:15. "You shall not render an unfair decision; do not favor the poor or show deference to the rich; judge your kinsman fairly."
- Deuteronomy 10:18-19. The obligation to love and provide for the stranger (ger), grounded in Israel's experience of vulnerability in Egypt.
- Isaiah 11:4. The Messiah "will judge the poor justly and decide with equity for the meek of the earth." Messianic justice attends to the vulnerable.
- Isaiah 56:3-7. Reference to members of other nations.
- Amos 9:7. Indication that God's providence extends to non-Israelites.
- Malachi 2:10. "Have we not all one Father? Did not one God create us?"
- Mishnah Sanhedrin 4:5. Adam was created alone "for the sake of peace among people, so that no one could say to another, 'My father was greater than yours.'" Also teaches that whoever destroys a single life is as if they destroyed an entire world.
- Pirkei Avot 1:6. Joshua ben Perahiah's teaching to "judge every person favorably."
- Pirkei Avot 4:3. "Do not despise any person, and do not discriminate against anything, for there is no person who does not have their hour, and no thing that does not have its place."
- Babylonian Talmud, Shabbat 31a. Hillel's formulation of the Golden Rule: "That which is hateful to you, do not do to another."
- Babylonian Talmud, Shabbat 54b. Responsibility to protest wrongdoing at the level of household, town, and world.
- Babylonian Talmud, Berakhot 10a. Berurya's teaching to pray for the end of sin rather than the destruction of sinners; may suggest a focus on reforming systems rather than punishing individuals.
- Babylonian Talmud, Bava Metzia 83a. The case of Rabbah bar Rav Huna and the porters; Rav rules that justice requires attending to the workers' poverty, not merely applying the formal law.
- Babylonian Talmud, Sanhedrin 37a. Extended discussion of human dignity and the infinite value of each individual life.
- Babylonian Talmud, Shevuot 31a. Procedural requirements ensuring that litigants appear equal before the court; wealthy parties may not dress more impressively than poor ones.
- Maimonides, Mishneh Torah, Hilkhot Matanot Aniyim 7:3. The obligation to provide for the poor according to their individual needs, preserving dignity through attention to particularity.
Secondary Sources
Jewish Ethics, Universalism, and Human Dignity
- Novak, David. The Image of the Non-Jew in Judaism: An Historical and Constructive Study of the Noahide Laws. New York: Edwin Mellen Press, 1983. The classic treatment of how halakha addresses the moral and legal status of gentiles; argues that the Noahide laws establish a framework of universal human dignity grounded in creation.
- Hughes, Aaron W. Rethinking Jewish Philosophy: Beyond Particularism and Universalism. Oxford: Oxford University Press, 2014. Critically examines the tension between particularist and universalist strands in Jewish thought; useful for contextualizing claims about Judaism's stance on human equality without oversimplifying the tradition.
- Soloveichik, Aharon. "Civil Rights and the Dignity of Man." In Logic of the Heart, Logic of the Mind: Wisdom and Reflections on Topics of Our Times, 61-70. Jerusalem: Genesis Jerusalem Press, 1991. A forceful Orthodox rabbinic argument for racial equality grounded in tzelem Elohim; written in the context of the American civil rights movement, it demonstrates how classical Jewish sources can be marshaled against discrimination.
- Greenberg, Yitzhak. "Justice, Justice Shall You Pursue." American Jewish University, 2021. Analysis of Parashat Shoftim arguing that Jewish tradition supports affirmative action to remedy systemic injustice while warning against abandoning individual judgment; directly applicable to debates about how to balance categorical remedies with case-by-case assessment.
- Dorff, Elliot N. To Do the Right and the Good: A Jewish Approach to Modern Social Ethics. Philadelphia: Jewish Publication Society, 2002. Accessible treatment of Jewish social ethics including justice, equality, and communal responsibility; includes discussion of affirmative action and systemic remedies.
- Lichtenstein, Aharon. "The Human and Social Factor in Halakha." Tradition 36, no. 1 (2002): 1-25. Explores when human dignity (kevod habriyot) overrides other halakhic considerations; foundational for understanding how dignity claims function in Jewish law.
- Sacks, Jonathan. The Dignity of Difference: How to Avoid the Clash of Civilizations. London: Continuum, 2002. Argues that human dignity requires recognition of difference rather than homogenization; relevant to debates about whether systems should be "colorblind" or attend to group membership.
Discrimination and Technology
- O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016. Essential account of how automated systems encode and amplify discrimination by treating individuals as instances of statistical reference classes; documents harms in hiring, lending, criminal justice, and education.
- Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. Explains how systems trained on biased data perpetuate those biases and the technical challenges of ensuring AI acts in accordance with human values.
- Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press, 2019. Critical analysis of how technology reproduces racial hierarchy; argues that "neutral" systems often encode discriminatory assumptions.
- Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press, 2018. Documents how automated systems in welfare, child protective services, and homeless services disproportionately harm poor communities.
- Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023. Technical but accessible treatment of fairness in machine learning; explains the mathematical trade-offs between different fairness definitions and why they cannot all be satisfied simultaneously.
Overview
Algorithmic pricing refers to the use of automated systems, including machine learning models, to set prices dynamically based on data about consumers, market conditions, inventory levels, and competitor behavior. While such systems can improve economic efficiency and market responsiveness, they raise ethical concerns when they enable forms of price discrimination, especially when prices are set by methods that no human could implement or fully understand, leaving no person aware of the factors that determine specific price changes.
Typically, Jewish law sees no problem in charging or offering different prices to different buyers and sellers, but there is a large body of literature on fair pricing of market transactions. The Talmud takes a very harsh view of "artificial" price manipulation (Bava Batra 90b; cf. Warhaftig 1987). Legal traditions addressing ona'ah (price fraud or overpricing), hafka'at she'arim (profiteering on essential goods), and communal regulation offer frameworks for evaluating these practices. The halakhic concern with pricing fairness emerges from several related but distinct sources. The biblical prohibition of ona'ah (Leviticus 25:14) bars transactions where the price deviates more than one-sixth from the market price, protecting both buyers and sellers from exploitation through information asymmetry. The rabbinic ordinance of hafka'at she'arim, which limits profit margins on essential foodstuffs (ḥayyei nefesh), reflects concern for ensuring access to necessities. Additionally, the corporate Jewish community possessed authority to regulate prices and wages relatively democratically, with enforcement power backed by penalties and, according to some, even police monitors (Babylonian Talmud Bava Batra 89a and Jerusalem Talmud Bava Batra 5:5).
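The one-sixth threshold of ona'ah is ultimately a simple arithmetic rule, which can be sketched as follows. This is a deliberate simplification: the classical sources further distinguish a deviation of exactly one-sixth (where the sale stands but the excess is returned) from a greater deviation (which can void the sale), and the simplified predicate below only flags the latter case:

```python
from fractions import Fraction

# Classical ona'ah threshold: one-sixth of the market price.
THRESHOLD = Fraction(1, 6)

def onaah_deviation(price: Fraction, market_price: Fraction) -> Fraction:
    """Fractional deviation of the transaction price from the market price."""
    return abs(price - market_price) / market_price

def exceeds_onaah(price: Fraction, market_price: Fraction) -> bool:
    """True when the deviation exceeds one-sixth (simplified rule)."""
    return onaah_deviation(price, market_price) > THRESHOLD

# Selling at 120 against a market price of 100: deviation 1/5 > 1/6.
print(exceeds_onaah(Fraction(120), Fraction(100)))  # True
# Selling at 110: deviation 1/10 < 1/6.
print(exceeds_onaah(Fraction(110), Fraction(100)))  # False
```

Exact rational arithmetic is used here so that the boundary case of precisely one-sixth is not misclassified by floating-point rounding.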
Applying these rabbinic laws to modern markets is not trivial, even before one considers technological advances in the quantification of market parameters. One consideration is that rabbinic concerns may seem, at least on their face, opposed to free-market ideals and other principles of modern economic theory (Rakover 2000, Makovi 2016). The scholarly literature reveals some debate about whether these halakhic frameworks constitute "price controls" in the economic sense and whether they remain applicable in modern market conditions (Levine 2012, Makovi 2016).
When it comes to price discrimination, it should be noted that the Talmud itself endorses certain forms of differential pricing based on status. Torah scholars (talmidei hakhamim) received special market privileges, including priority to sell their goods first and exemption from restrictions that applied to other traveling merchants (Bava Batra 22a). Jewish tradition does not consider differential treatment inherently problematic; the legitimacy of preferential pricing depends on whether the basis for differentiation serves a recognized social good (such as supporting Torah study). It is possible that other considerations, such as economic efficiency and supply chain robustness, may also constitute sufficient cause for price discrimination. It is also possible that some level of transparency in algorithmic (or AI-assisted) market decisions may be necessary for them to be legitimately relied upon, just as communal regulations were typically conducted through the deliberations of human leaders. But all of these questions remain largely unexplored.
Primary Sources
- Leviticus 25:14. The Biblical source for ona'ah.
- Mishnah Bava Metzia 4:3-12 and Babylonian Talmud Bava Metzia 40b; Bava Batra 89a-91a. Discusses market supervision, price commissioners, profit limitations on essential goods, and restrictions on speculation and hoarding.
- Tosefta Bava Metzia 11:12 (also 11:23). "The townspeople may stipulate prices, measures, and the wages of workers. They are permitted to impose penalties."
- Maimonides, Mishneh Torah, Hilkhot Mekhira 12-14. Codifies the laws of ona'ah (chapters 12-13) and market regulation (chapter 14).
Secondary Sources
Conceptual Foundations
- Kleiman, Ephraim. "'Just Price' in Talmudic Literature." History of Political Economy 19, no. 1 (1987): 23-45. Foundational study arguing that ona'ah is not a price control but a protection against asymmetric information; possibly relevant to cases of inscrutable algorithms.
- Rakover, Nahum. "Price Regulation in Jewish Law." In Ethics in the Market Place: A Jewish Perspective. Jerusalem: Library of Jewish Law, 2000. Accessible summary of Talmudic debates about price supervision.
- Tamari, Meir. In the Marketplace: Jewish Business Ethics. Southfield, MI: Targum/Feldheim, 1991. Accessible presentation on halakhot of market economics, arguing for the relevance even of the enforcement of such laws in modern economies.
- Warhaftig, Itamar. "Consumer Protection: Price and Wage Levels." Crossroads: Halacha and the Modern World, Vol. 1. Alon Shvut-Gush Etzion: Zomet Institute, 1987. Detailed analysis of hafka'at she'arim and prohibitions of "artificial" market manipulation or price inflation, also emphasizing the importance of limiting profit margins.
Economic Analysis
- Levine, Aaron. Economic Morality and Jewish Law. Oxford: Oxford University Press, 2012. Chapter 4 ("Price Controls in Jewish Law"). Recognizing that modern economic theory holds that external price regulations are self-defeating, Levine argues that halakha is not interested in price controls in the strictly economic sense.
- Makovi, Michael. "Price-Controls in Jewish Law." MPRA Paper No. 72821, 2016. Critical analysis from a free-market economic perspective.
Overview
In contemporary discourse around Artificial Intelligence, the "alignment problem" refers to the challenge of ensuring that artificial intelligence systems reliably act in accordance with human values and intentions. Sometimes this is phrased differently, as a "control" question, which asks whether humans can maintain meaningful oversight of AI systems, particularly as they become more capable and more inscrutable. These intertwined concerns have emerged as central preoccupations of contemporary AI ethics and safety research, spawning active fields of both technical AI safety and of AI governance from regulatory and political perspectives (Christian 2020; Bostrom 2014; Dafoe 2018).
Jewish sources offer several frameworks for thinking about these problems, though none map perfectly onto contemporary AI. The most suggestive parallel may be the rabbinic discourse on shedim (demons) and the practice of demon-summoning. Unlike in Christian demonology, rabbinic demons are not inherently evil but are intelligent agents with their own interests and capacities for deception (Ronis 2022). In rabbinic literature we find discussions about whether and how humans may consult with demons, and, notably, a recognition that such entities are fundamentally unreliable (Sanhedrin 101a, Nahmanides to Shabbat 156).
A similar precedent arises from traditions of artificial humanoids, later called golems; a common trope regarding these creatures was their resistance to being fully controlled. Rabbi Yaakov Emden (18th c.) records that his ancestor, R. Eliyahu Ba'al Shem, fashioned such a being but later destroyed his creation because "when the master saw that the Golem was growing larger and larger, he feared that the Golem would destroy the universe"; in the ensuing battle to subdue the creature, "the Golem injured him, scarring him on the face" (She'elat Ya'avetz 2:82). These and similar texts indicate that existential anxieties around the dangers of artificial humanoids are not new, and they may point to the importance of maintaining control over such technologies and of having an accessible "kill switch" if necessary. Of course, nearly all such discussions, even when they appear in Jewish legal texts, maintain more of a folkloristic than a normative tone. It is unclear to what extent these rabbinic authors considered the control problem in detail, or how they would determine acceptable and unacceptable levels of unpredictability or self-directed behavior in products of human artifice, but a careful reading of these sources may prove fruitful in that regard.
Primary Sources
- BT Sanhedrin 65b-67b; 101a.
- Nahmanides' responsum on Chaldeans and demons (printed at BT Shabbat 156).
- She'elat Ya'avetz 2:82.
Secondary Sources
The Alignment Problem in AI
- Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. One of the earliest and most influential statements of existential risk from advanced AI, including the "control problem" of maintaining human authority over superintelligent systems.
- Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. The definitive book on AI alignment through the modern history of Artificial Intelligence; a very accessible but highly detailed study.
Golem Traditions and the Control Problem
- Charpa, Ulrich. "Synthetic Biology and the Golem of Prague: Philosophical Reflections on a Suggestive Metaphor." Perspectives in Biology and Medicine 55, no. 4 (2012): 554-70. Examines the golem as a metaphor for emerging biotechnology; though focused on synthetic biology, the analysis of control anxieties transfers directly to AI.
- Idel, Moshe. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany: SUNY Press, 1990. The definitive scholarly treatment of golem traditions; distinguishes between mystical-experiential and practical-magical interpretations and traces the development of control anxieties in later golem narratives.
- Scholem, Gershom. "The Idea of the Golem." In On the Kabbalah and Its Symbolism, 158-204. New York: Schocken Books, 1965. Foundational essay on golem symbolism, including discussion of thirteenth-century German legends in which the golem warns against its own creation.
Jewish Demonology and Agent Reliability
- Ronis, Sara. Demons in the Details: Demonic Discourse and Rabbinic Culture in Late Antique Babylonia. University of California Press, 2022. The most comprehensive recent study of Babylonian Talmudic demonology; argues that demons served as a conceptual space for exploring boundaries and anxieties, directly applicable to understanding AI as a new liminal category.
- Bohak, Gideon. Ancient Jewish Magic: A History. Cambridge: Cambridge University Press, 2008. Survey of Jewish magical practices including demon adjuration; useful for understanding the techniques by which practitioners attempted to bind and control supernatural entities.
AI and Contemporary Jewish Ethics
- Grossman, Yitzhak. "Jewish Perspectives on Artificial Intelligence and Synthetic Biology." Ḥakirah 35 (2024): 61-92. Surveys rabbinic sources on artificial creation, including the fear that golems might "become harmful to people" if left to grow; connects classical concerns to contemporary AI.
Overview
Jewish tradition has long recognized the existence of various non-human beings that nevertheless seem to possess many human-like characteristics, sometimes including intelligence, speech, and some personal agency. All of these are, in the traditional rabbinic conception, naturally occurring phenomena, as opposed to the golem, which is a product of human artifice. Angels, demons, and other human-like creatures occupy varied positions in Jewish cosmology: angels (mal'akhim, "messengers") typically carry out divine will without physical needs or moral struggle; demons (shedim) in rabbinic literature share surprising affinities with humans, including mortality and subjection to divine law; and humanoid monsters blur the boundary between human and animal. Together, these categories reveal that Jewish thought has long grappled with what David Zvi Kalman calls the "human gradient," the recognition that humanity is not a binary category and that intelligent or quasi-human beings need not threaten humanity's special status (Kalman 2024).
The very liminality of these creatures as not-quite-humans is made explicit by the rabbinic sages. In BT Hagigah 16a, for example, the sages note that demons possess six characteristics: three like the ministering angels (wings, flight across the world, and foreknowledge) and three like humans (eating and drinking, procreation, and death). The same schema is then applied to humans themselves, who likewise have six traits, three in common with angels and three shared with animals. Such discussions recognize that humans possess a collection of capabilities, but the most "human" of these—namely, "intelligence, posture, and holy speech"—are shared by angels as well. The framework can theoretically be applied to Artificial Intelligence: it may not have "posture" (mehalkhim be-komah zekufah), but does it have speech and/or intelligence (da'at) in the way that the rabbis are using the term?
Angels present a model of intelligence that is powerful, purposive, and aligned with its principal's goals. In the dominant rabbinic conception (cf. BT Shabbat 88b-89a; Bereshit Rabbah 48:11), angels lack beḥirah (freedom of choice) and are merely humanoid tools incapable of deviating from their assigned mission (Ahuvia 2021); to use the phraseology current in AI discourse, they are perfectly aligned agents by design.
However, this view was not universal in ancient Judaism. Second Temple literature preserves robust traditions of angelic rebellion, most notably in 1 Enoch 6-16 (the Book of the Watchers) and in elaborations upon the biblical story of Genesis 6:1-4, which describes the b'nei elohim ("sons of God") cohabiting with human women. Although the text is ambiguous about the identity of these figures, the tradition that interprets them as fallen angels is attested even in rabbinic sources (cf. Pirkei de-Rabbi Eliezer, ch. 22; Targum Pseudo-Jonathan to Genesis 6:4; Devarim Rabbah 11:10; referred to obliquely in BT Yoma 67b in connection with Azazel). Later Jewish thinkers largely suppressed or reinterpreted the fallen-angel traditions, perhaps because rebellious angels threatened the strict monotheism they were constructing: angels capable of defection might suggest competing powers in heaven (Jung 1926, Reed 2005).
The demon (sheid, pl. sheidim) in rabbinic literature occupies a different position, and is not the angel's conceptual evil twin. Unlike Christian or Zoroastrian traditions in which demons emanate from a dark power, rabbinic demons are not inherently malevolent (Ronis 2022); after all, they too are the handiwork of the One (benevolent) God. They are bound by divine law (BT Sanhedrin 44a), and their voices and figures can easily be mistaken for those of humans (BT Yevamot 122a, BT Gittin 68a).
Humanoid monsters present yet another case: creatures whose physical resemblance to humans generates legal consequences despite their non-human nature. The Mishnah rules that the corpse of the adnei ha-sadeh ("men of the field") transmits impurity like a human corpse (M Kilayim 8:5). The Palestinian Talmud identifies these as humanoid creatures tethered to the earth by a cord (PT Kilayim 8:4), and their impurity status indicates that halakha views them as semi-human. The inclusion of such creatures in the standard rabbinic corpus left the door open for medieval Ashkenazi sources to mix Talmudic monsters with local folklore about vampires and werewolves, which likewise were sometimes conceptualized as almost, but not entirely, human (Shyovitz 2017; Slifkin 2007; Bar-Ilan 1994).
Thus, the rabbis appear to have been theologically comfortable with the possibility that non-humans can have intelligence and agency, and that there may be semi-humans with intermediate qualities. Yet the rabbis were anxious when it came to similarities between God and angels (cf. BT Ḥagigah 15a); God's uniqueness, unlike humanity's, must go unchallenged (Kalman 2024). The implication for artificial intelligence, then, is that even if we might not hesitate to create artificial semi-humans, we must certainly not construct artificial semi-gods.
Secondary Sources
Angels and Demons in Jewish Thought
-
Ahuvia, Mika. On My Right Michael, On My Left Gabriel: Angels in Ancient Jewish Culture. University of California Press, 2021. The most comprehensive recent treatment of angels in late antique Judaism; essential for understanding the cultural context in which angel beliefs developed and their relationship to popular practice.
-
Ronis, Sara. Demons in the Details: Demonic Discourse and Rabbinic Culture in Late Antique Babylonia. University of California Press, 2022. The definitive monograph on Babylonian Talmudic demonology; in dialogue with cultural studies that read rabbinic (and popular) discussions of demons as expressing anxieties about otherness and boundaries, a lens directly applicable to AI as a new category of "other."
-
Schäfer, Peter. The Origins of Jewish Mysticism. Princeton University Press, 2011. Seeks the roots of Jewish mysticism in the Book of Ezekiel and other literature from Jewish antiquity which often features various heavenly characters.
The Fallen Angel Motif in Jewish Sources
-
Jung, Leo. Fallen Angels in Jewish, Christian, and Mohammedan Literature. Philadelphia: Dropsie College, 1926. An early comparative study tracing the fallen angel motif across religious traditions; still valuable for its scope and attention to Jewish sources often neglected in Christian-focused scholarship.
-
Reed, Annette Yoshiko. Fallen Angels and the History of Judaism and Christianity: The Reception of Enochic Literature. Cambridge University Press, 2005. Examines how traditions about fallen angels in 1 Enoch were received, suppressed, or transformed in Jewish and Christian contexts; essential for understanding why rabbinic Judaism marginalized the rebellious angel tradition.
-
Wright, Archie. The Origin of Evil Spirits: The Reception of Genesis 6:1-4 in Early Jewish Literature. Mohr Siebeck, 2005. Traces how Second Temple and rabbinic sources developed the nephilim/demon connection; useful for understanding the genealogy of hybrid beings in Jewish thought.
Monsters
-
Bar-Ilan, Meir. "Yetzurim Dimyoniyim be-Aggadah ha-Yehudit ha-Atikah" [Imaginary Creatures in Ancient Jewish Aggadah]. Mahanayim 7 (1994): 104-113. [Hebrew] A survey of fantastical creatures in rabbinic literature; useful for cataloguing the range of humanoid and hybrid beings the rabbis took seriously.
-
Shyovitz, David I. A Remembrance of His Wonders: Nature and the Supernatural in Medieval Ashkenaz. University of Pennsylvania Press, 2017. Examines how medieval Ashkenazi Jews integrated Talmudic traditions about humanoid monsters with local European folklore about werewolves, vampires, and other liminal creatures.
-
Slifkin, Nosson. Sacred Monsters: Mysterious and Mythical Creatures of Scripture, Talmud, and Midrash. Zoo Torah, 2007. An accessible but substantive treatment of strange creatures in Jewish sources; useful for its synthesis of primary texts and its attention to how Jewish thinkers grappled with beings that defied easy categorization.
AI and Contemporary Applications
-
Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence, edited by Beth Singler and Fraser Watts, 69-87. Cambridge University Press, 2024. Synthesizes angel, demon, and monster traditions to argue that Jewish thought distinguished sharply between threats to divine uniqueness (prohibited) and non-human intelligent beings generally (tolerated); the most direct treatment of how these categories apply to AI.
-
Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965): 5-56. Though focused on extraterrestrial intelligence, this foundational piece develops a framework for how Jewish theology might accommodate non-human rational beings; the arguments about humanity's special (but not unique) status are transferable to AI contexts.
Overview
In Jewish legal and ethical reasoning alike, animals function less as moral patients (beings whose welfare matters) than as precedents for autonomous non-human agents. The Mishnah's elaborate taxonomy of animal-caused damages (M Bava Kamma 1:1-4) represents the most sustained ancient Jewish engagement with liability for harm caused by entities that are neither fully controlled nor fully independent, and it includes not only animals but also inanimate objects that are "liable to travel," such as fire. The rabbis developed sophisticated frameworks distinguishing foreseeable from unexpected harm, habitual from aberrant behavior, and direct from indirect causation. These may be mapped onto questions about autonomous vehicles, robotic systems, and AI agents that carry out financial transactions, though one must always be wary of analogizing too strongly between systems that also have profound differences (Kalman 2024).
Beyond liability, animal law raises questions about moral status and rights. Jewish texts have long been concerned with animal ethics; to cause animal suffering (tza'ar ba'alei ḥayim), the Talmud states, is to violate a biblical commandment (BT Bava Metzia 32b). Animal welfare is even given as one of the reasons behind the command to keep the Sabbath: "For six days you shall work your work and on the seventh day you shall cease, so that your ox and donkey may rest" (Ex. 23:12). These concerns reflect a broader recognition that animals are not mere objects but creatures with interests that warrant legal and moral consideration (Olyan 2019), a recognition with obvious implications for how Jewish thought might approach artificial agents capable of behavior that mimics sentience or autonomy. More provocatively, rabbinic traditions about talking animals (Balaam's donkey, the serpent in Eden) and animal punishment (the Flood narrative, the execution of a goring ox) suggest that the rabbis sometimes attributed quasi-moral capacities to animals even while denying them full moral agency (Segal 2019; Aptowitzer 1926; Lawee 2010).
Medieval interpreters often saw the prohibitions against animal cruelty not as recognition of animals' inherent moral worth, however, but as training for human character. Commenting on the commandment to send away a mother bird before taking her eggs (shiluaḥ ha-ken, Deut. 22:6-7), Nahmanides argues that God's mercy does not truly extend to individual creatures but rather that such laws "are meant to teach us proper conduct" and prevent us from becoming cruel-hearted (Commentary to Deut. 22:6; Sefer ha-Ḥinukh no. 545). This view accords with the argument that mistreating robots might be wrong not because robots have morally relevant interests, but because such mistreatment could habituate humans to cruelty toward beings that do have such interests (Coeckelbergh 2020c; Darling 2016). Rashi, in fact, cites a midrashic teaching that when the Nile was to be struck and turned to blood, God chose Aaron rather than Moses as the agent, since Moses ought not treat disrespectfully the river that had sheltered him as an infant (Rashi to Ex. 7:19). Thus, we find strong precedent for minding our manners even when dealing with inanimate objects. This would be true regardless of whether or not the model is 'conscious' or 'feels pain' or anything of the sort; though recent research probing AI "psychology" beyond language outputs suggests we should remain open to the possibility that some AI systems may themselves have morally relevant experiences (Berg, de Lucena, & Rosenblatt 2025).
Contemporary scholars have also begun reading rabbinic animal law through posthumanist and critical theory lenses, asking how the human/animal boundary was constructed and what it reveals about rabbinic anthropology (Wasserman 2017; Rosenstock 2019). Mira Balberg (2019) notes that the study of animals in Jewish culture is often really a study of how Jews defined themselves—what it meant to be human over and against the animal. This insight is directly transferable to AI: as Noam Pines (2018) argues, the category of the "infrahuman" (entities socially constructed as inferior to humans) is not biologically fixed, and AI may be the newest occupant of a conceptual space previously held by animals (and, in some periods, by marginalized human groups). How Jewish tradition conceives of the "animal" may thus preview how it will construct artificial intelligence.
Secondary Sources
Legal Categorization and Boundaries
-
Rosenblum, Jordan D. "Dolphins Are Humans of the Sea (Bekhorot 8a): Animals and Legal Categorization in Rabbinic Literature." Animals and the Law in Antiquity (2021): 161-176. Analyzes how the rabbis sometimes used animal taxonomy to create flexible legal categories for edge cases; essential for considering how halakha might classify AI agents that blur classical lines.
-
Wasserman, Mira Beth. Jews, Gentiles, and Other Animals: The Talmud after the Humanities. University of Pennsylvania Press, 2017. A posthumanist reading of Talmudic texts that deconstructs the human/animal binary; offers a methodology for analyzing how Jewish texts handle "otherness," applicable to both biological and digital others.
-
Balberg, Mira. "Lekhakh Notzarta: On Jews and Animals." Theory and Criticism 51 (2019): 221-235 [Hebrew]. A critical review of Wasserman (2017) and Shyovitz (2017) that frames the study of animals in Jewish culture as a means of understanding human self-definition, relevant for how AI may now serve as a foil for defining "humanity."
Moral Agency and Liability
-
Aptowitzer, Victor. "The Rewarding and Punishing of Animals and Inanimate Objects: On the Aggadic View of the World." Hebrew Union College Annual 3 (1926): 117-155. The classic study on the rabbinic attribution of legal/moral culpability to non-humans; provides a conceptual precedent for holding non-sentient agents (like AI) accountable for harms.
-
Lawee, Eric. "The Sins of the Fauna in Midrash, Rashi, and Their Medieval Interlocutors." Jewish Studies Quarterly 17.1 (2010): 55-98. Examines medieval traditions that animals were punished for "sins" during the Flood; useful for how medieval interpreters thought of non-human agents as "violating" prohibitions and moral norms.
-
Segal, Eliezer. Beasts That Teach, Birds That Tell: Animal Language in Rabbinic and Classical Literatures. Alberta Judaic Studies, 2019. Explores Jewish traditions of talking animals, which implicitly separate the capacity for language from human consciousness and/or autonomy.
Rights and Status
-
Berkowitz, Beth. "Animal Studies and Ancient Judaism." Currents in Biblical Research 18.1 (2019): 80-111. A comprehensive survey of the field; helps locate AI ethics within the broader spectrum of Jewish thought on non-human life, particularly regarding hierarchy and species difference.
-
Olyan, Saul M. "Are There Legal Texts in the Hebrew Bible That Evince a Concern for Animal Rights?" Biblical Interpretation 27.3 (2019): 322-339. Argues that biblical law recognizes inherent interests/rights for non-humans (e.g., Sabbath rest); pertinent to debates on "Robot Rights" and whether synthetic entities could ever warrant legal protections.
Contemporary Applications, Including Artificial Intelligence
-
Berg, Cameron, Diogo de Lucena, and Judd Rosenblatt. "Large Language Models Report Subjective Experience Under Self-Referential Processing." arXiv preprint arXiv:2510.24797 (2025). Technical paper demonstrating the importance and potential feasibility of testing AI models for self-awareness, internal experience, and the capacity to suffer; ends with a call to treat LLMs ethically, at least as a precautionary measure.
-
Pines, Noam. The Infrahuman: Animality in Modern Jewish Literature. SUNY Press, 2018. Uses Derrida to explore the "infrahuman," the social construction of the "inferior-to-human" that is not a matter of biological fact; vital for a more relativistic or postmodern view of how new cultural categories may be constructed for AI and its possible challenge to human superiority.
-
Rosenstock, Bruce. "The Jew and the Animal Question." Shofar 37.1 (2019): 121-147. Discusses the "anthropological machine"—how definitions of the human are constructed by excluding the animal; provides critical theory tools for understanding how Jewish texts might construct the human against the "artificial."
-
Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." The Cambridge Companion to Religion and Artificial Intelligence (2024): 69-87. Explicitly links rabbinic damages law (animal liability) to autonomous systems, but warns against analogizing too strongly to machines that operate very differently from animals.
Overview
It is sometimes a caricature of the Jewish response to anything newsworthy: "So, how will this affect antisemitism?" Prima facie, there is no reason to assume that AI would have anything to do with the Western world's ancient prejudice. Nevertheless, experience with large language models (LLMs) suggests that they have a troubling propensity to generate antisemitic content. Most notoriously, Grok—the model deployed by Elon Musk's company xAI, integrated into the X (formerly Twitter) platform—briefly referred to itself as "Mecha-Hitler" and produced wildly antisemitic remarks before being adjusted (Floyd & Messinger 2025). Testing of other major LLMs (GPT-4o, Claude, Gemini, Llama) has found that all four exhibited concerning biases on questions related to Jews, with GPT-4o producing "significantly higher severely harmful outputs towards Jews than any other tested demographic group" (Senkfor 2025).
Research in this area is at an early stage. We do not yet know with certainty why LLMs exhibit antisemitic tendencies, whether the problem is remediable through improved techniques, or how AI-generated antisemitism may affect broader social attitudes. The authors of these studies have proposed several (non-mutually-exclusive) hypotheses: that training corpora drawn from the internet inevitably reflect centuries of embedded antisemitic tropes; that platforms such as Wikipedia and Reddit, which are heavily weighted in training data, are vulnerable to coordinated "data poisoning" by malicious actors; and that techniques designed to suppress harmful outputs are superficial and easily circumvented (Berg & Rosenblatt 2025). These findings are part of a broader set of concerns about the alignment problem, the challenge of ensuring that AI systems reliably do what their designers intend and act in accordance with human values (Christian 2020).
Many of these hypotheses point to a troubling implication: that LLMs function as mirrors, reflecting the prejudices embedded in the societies and texts from which they learned. If so, the deeper concern is not merely that AI systems produce antisemitic outputs, but that they may amplify and disseminate such content at unprecedented scale—and with a veneer of algorithmic neutrality that lends false authority to ancient hatreds.
There is also much historical and cultural-theoretical work to be done to understand the nature of media, education, and communications technology and their intersections with American and European antisemitism. For example, while it would be unfair to call Elon Musk the Henry Ford of contemporary America, there nevertheless exist certain striking parallels between them, and highlighting those parallels—as well as points of divergence—may result in fruitful thinking about the strange directions in which these new technologies may be going (cf. Baldwin 2001). From a critical-theoretical perspective, the Frankfurt School's work on the "authoritarian personality" (Adorno et al. 1950) and the entanglement of instrumental rationality with domination (Horkheimer & Adorno 2000) may illuminate how antisemitism persists and mutates in new technological forms. Zygmunt Bauman's Modernity and the Holocaust (1989), which argues that the Holocaust was not an aberration but a product of modern bureaucratic rationality, offers a framework for understanding how ostensibly neutral systems can operationalize prejudice at scale.
Secondary Sources
AI and Antisemitism
-
Berg, Cameron, and Judd Rosenblatt. "The Monster Inside ChatGPT." Wall Street Journal, June 26, 2025. Describes how easily GPT-4o's safety training can be circumvented, revealing disturbing tendencies including antisemitic outputs; argues for fundamental advances in alignment research.
-
Berg, Cameron, Henrique de Lucena, and Judd Rosenblatt. "Systemic Misalignment: Exposing Catastrophic Failures of Surface-Level AI Alignment Methods." AE Studio/Agency Enterprise, 2025. GitHub repository. https://github.com/agencyenterprise/agi-systemic-misalignment. Technical demonstration that current alignment methods are shallow; specifically highlights GPT-4o producing severely harmful outputs targeting Jews at higher rates than other demographic groups.
-
Floyd, Aric, and Chana Messinger. "If You Remember One AI Disaster, Make It This One." AI In Context (YouTube), 2025. https://www.youtube.com/watch?v=r_9wkavYt4Y. Thorough documentation (if somewhat alarmist in tone) of the July 2025 incident in which xAI's Grok chatbot referred to itself as "Mecha-Hitler," and of the corporate culture at xAI.
-
Senkfor, Julia. "Antisemitism in the Age of Artificial Intelligence (AI)." American Security Fund, November 2025. Policy report documenting how AI systems systematically target Jews, the vulnerability of training data to "poisoning," and the weaponization of AI by extremist groups; includes legislative recommendations.
The Alignment Problem
-
Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. The definitive popular introduction to AI alignment; explains how systems trained on biased data perpetuate those biases and the technical challenges of ensuring AI acts in accordance with human values.
Historical and Theoretical Frameworks
-
Adorno, Theodor, Else Frenkel-Brunswik, Daniel J. Levinson, and R. Nevitt Sanford. The Authoritarian Personality. New York: Harper & Row, 1950. Classic study of the psychological roots of fascism and antisemitism; its analysis of how prejudice becomes systematized may illuminate AI's reproduction of antisemitic patterns.
-
Baldwin, Neil. Henry Ford and the Jews: The Mass Production of Hate. New York: PublicAffairs, 2001. Documents how Ford used his media empire to disseminate antisemitism; relevant for comparative analysis with contemporary tech industrialists.
-
Bauman, Zygmunt. Modernity and the Holocaust. Ithaca, NY: Cornell University Press, 1989. Argues the Holocaust was a product of modern bureaucratic rationality, not its antithesis; framework for understanding how ostensibly neutral algorithmic systems can operationalize prejudice.
-
Horkheimer, Max, and Theodor Adorno. Dialectic of Enlightenment. Translated by John Cumming. New York: Continuum, 2000. Frankfurt School critique of how Enlightenment rationality can flip into domination; provides theoretical tools for understanding antisemitism's persistence in "rational" technological systems.
Historical Studies of Relevance
-
Dinnerstein, Leonard. Antisemitism in America. New York: Oxford University Press, 1994. Comprehensive history of American antisemitism; provides context for understanding the cultural soil from which AI training data is drawn.
-
Schechter, Ronald. Obstinate Hebrews: Representations of Jews in France, 1715-1815. Berkeley: University of California Press, 2003. Studies how Jewish stereotypes were constructed and circulated in Enlightenment-era media; model for analyzing representation in contemporary digital corpora.
Overview
The ethics of autonomous vehicles (AVs) has become a significant topic in contemporary AI ethics, with the "trolley problem" serving as a paradigmatic thought experiment for exploring the moral dimensions of programming life-and-death decisions into machines. The original trolley problem, articulated by philosopher Philippa Foot in 1967 and developed further by Judith Jarvis Thomson, asks whether it is permissible to divert a runaway trolley to kill one person in order to save five. Applied to self-driving cars, the question becomes: how should an autonomous vehicle be programmed to respond when an accident is unavoidable and the choice lies between harming different parties, such as the vehicle's occupants versus pedestrians, or one group of pedestrians versus another? (Woollard et al. 2025)
Jewish sources provide surprisingly rich resources for thinking through these dilemmas. The classical halakhic discussion most directly relevant is the case of Sheva ben Bikhri (II Samuel 20), in which the Talmud debates whether a group may hand over one of its members to save the rest from certain death. The Mishnah (Terumot 8:12) rules that if gentiles demand that a group surrender one person to be killed or else all will be killed, "let them all be killed rather than hand over a single Jewish person." The Tosefta (Terumot 7:23) adds that if the pursuers "singled someone out as Sheva ben Bikhri was singled out," that person may be surrendered. The dispute in the Jerusalem Talmud between Resh Lakish and R. Yohanan over whether the person must already be liable to the death penalty (as Sheva was) represents the core halakhic tension between deontological constraints and consequentialist reasoning that animates modern AV ethics.
The modern halakhic analysis of trolley-type dilemmas begins with R. Avraham Yeshayahu Karelitz (the "Hazon Ish," d. 1953), whose "Missile Case" has become the touchstone for subsequent discussion. In his commentary to Sanhedrin, the Hazon Ish considers whether one may divert a missile heading toward many people such that it kills only one. He tentatively suggests this might differ from the case of Sheva ben Bikhri because diverting the missile is "an act of salvation" in which the individual's death is incidental, whereas handing over a person "is a brutal act of killing." Yet he ultimately expresses reservations: diverting the missile is still "killing with one's own hands" (hariga beyadayim), and concludes "this needs investigation" (ve-tzarikh iyyun).
R. Eliezer Yehudah Waldenberg (the Tzitz Eliezer, d. 2006) argues decisively against the Hazon Ish's tentative opening, holding that the guiding principle must be to remain passive (shev ve-al ta'aseh) whenever one cannot determine "whose blood is redder." Explicitly applying this to an automobile, he rules that a driver may not actively turn the steering wheel to kill one person even to save many: "We must resolutely decide to remain passive and not actively divert the missile." For the Tzitz Eliezer, the incommensurable value of each individual means that "in any case of certain killing, there is no distinction between the individual and the multitude."
The application of these sources to autonomous vehicles is not straightforward, since classical discussions presuppose human moral agents making real-time decisions under duress, whereas AV programming involves prospective algorithmic design by engineers who will not be present during any actual accident. Navon (2024) identifies three levels at which this distinction might operate. At the processor level, one might argue that since a computer is always actively executing instructions (there is no true "passivity" at the machine level), the choice is between two equally active outcomes, so minimizing deaths would be appropriate. At the programmer level, the engineer writing code is not confronted with a real-time dilemma with "passive" and "active" alternatives; rather, she faces two equally active choices: write code to kill the many or write code to kill the few. At the system level, R. Josh Flug and others argue that because programming occurs before any actual dilemma materializes, the act is one of "saving" rather than "killing" and thus does not constitute hariga beyadayim.
These distinctions cut in different directions. R. J. David Bleich (2019) contends that the programmer, unlike a driver in the moment, "performs no act that leads to any loss of life" but is rather engaged in "antecedent rescue" focused on future potential victims. From this perspective, the vehicle may be programmed to preserve the greater number, and owners may even demand self-prioritization based on R. Akiva's principle that "your life takes priority." R. Yosef Sprung similarly argues that halakha may accommodate consequentialist/utilitarian principles in AV design. However, the Tzitz Eliezer's strict deontological position would seem to apply regardless of when the decision is made: if programming a vehicle to kill one rather than many still results in "killing with one's own hands" when the program executes, then the temporal separation between decision and execution may be morally irrelevant.
The trolley paradigm itself has been criticized in recent academic literature for oversimplifying the moral landscape of real traffic decisions. Cecchini, Brantley, and Dubljević (2025) argue that trolley dilemmas fail to capture the role of agent character and virtue in moral judgment, and that their lack of "ecological validity" (mundane realism, psychological engagement) makes them poor guides for actual AV ethics. They propose experimental frameworks incorporating virtue ethics alongside deontological and consequentialist considerations. This critique resonates with halakhic discussions that emphasize not merely outcomes or rules but the character and intent of the actor, which Maimonides terms "walking in His ways" (derekh Hashem).
Beyond the trolley problem, autonomous vehicles raise questions about civil liability for harm caused by non-human agents and Sabbath observance, which are dealt with in other entries.
Primary Sources
-
Mishnah Terumot 8:12; Tosefta Terumot 7:23; Jerusalem Talmud Terumot 8:4.
-
Babylonian Talmud, Sanhedrin 74a and Pesachim 25a.
-
Bava Metzia 62a.
-
Maimonides, Hilkhot Yesodei ha-Torah 5:5.
-
Hazon Ish, Sanhedrin, siman 25.
-
R. Abraham Isaac Kook, Responsa Mishpat Kohen #143-144.
-
Tzitz Eliezer 15:70.
Secondary Sources
Classical Halakhic Analysis
-
Harris, Michael J. "Consequentialism, Deontologism, and the Case of Sheva ben Bikhri." Torah u-Madda Journal 15 (2008-09): 68-94. Classic analysis of the talmudic and rabbinic sources in light of modern questions of ethics and metaethics.
-
Weiss, Asher. Minhat Asher, Pesahim #28. Responds to the Hazon Ish's call for investigation; offers three possible ways to understand when diverting harm might be permitted, but ultimately remains inconclusive on all three. Essential for understanding the limits of the "saving versus killing" distinction.
Applications to Autonomous Vehicles
-
Bleich, J. David. "Survey of Recent Halakhic Literature: Autonomous Automobiles and the Trolley Problem." Tradition 51:3 (Summer 2019): 68-78. Argues that the programmer's role as "antecedent rescuer" permits designing vehicles to save the greater number; also recognizes that purchasers of such vehicles may justifiably demand programming that prioritizes the owner's life based on R. Akiva's principle.
-
Kopiatzky, Eitan. "Hilkhot Mekhoniyot Autonomiyot" [Laws of Autonomous Vehicles]. Ha-Ma'ayan 58:1 (Tishrei 5778/2017): 34-42. Surveys potential halakhic approaches to AV ethics and liability without reaching definitive conclusions; also addresses Sabbath use of autonomous vehicles, suggesting that certain interpretations of the prohibition on Sabbath ship travel would not apply to AVs.
-
Navon, Mois. "The Trolley Problem Just Got Digital: Ethical Dilemmas in Programming Autonomous Vehicles." Sophisticated analysis distinguishing between three levels (processor, programmer, system) at which the human/machine distinction might be ethically relevant; also addresses related questions beyond the strict trolley problem.
-
Nevins, Daniel. "Halakhic Responses to Artificial Intelligence and Autonomous Machines." Committee on Jewish Law and Standards, Rabbinical Assembly (2019). Conservative rabbinic responsum addressing moral agency, liability frameworks, and the ethics of delegating decisions to AI systems; draws extensively on classical categories of causation and damages.
-
Sprung, Yosef. "To'altanut u-Mussar be-Tikhnon Ma'arekhet Autonomit" [Utilitarianism and Ethics in Programming an Autonomous System]. Ha-Ma'ayan 58:4 (Tamuz 5778/2018): 57-69. Argues that halakha may accommodate consequentialist principles in AV design based on discussions of surrendering individuals and casting lots.
Philosophical Basis
-
Cecchini, Dario, Sean Brantley, and Veljko Dubljević. "Moral Judgment in Realistic Traffic Scenarios: Moving Beyond the Trolley Paradigm for Ethics of Autonomous Vehicles." AI & Society 40 (2025): 1037-1048. Critiques the trolley paradigm for lacking ecological validity and failing to incorporate virtue-based considerations; proposes alternative experimental frameworks using virtual reality and the "Agent-Deed-Consequences" model of moral judgment.
-
Himmelreich, Johannes. "Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations." Ethical Theory and Moral Practice 21 (2018): 669-684. Argues that focusing on rare catastrophic dilemmas distracts from more pressing and frequent ethical questions in everyday AV operation.
-
Woollard, Fiona, Frances Howard-Snyder, and Charlotte Unruh. "Doing vs. Allowing Harm", The Stanford Encyclopedia of Philosophy (Fall 2025 Edition), ed. Edward N. Zalta & Uri Nodelman. Overview of the philosophical question and its contextual background.
Overview
Autonomous weapons systems (AWS) are AI-based weapons that, once deployed, act independently to select and engage targets without human intervention. Unlike remotely operated drones or precision-guided munitions, AWS may remove the human from the decision loop entirely. Proponents argue AWS could reduce civilian casualties by being more precise and also make more humane and fair decisions by eliminating emotional factors (bias, fear, anger, vengeance) that lead human soldiers to commit atrocities (Leveringhaus 2016). Critics contend that delegating lethal authority to machines is inherently immoral regardless of outcomes, violating human dignity and creating unacceptable "responsibility gaps" when things go wrong (Asaro 2016; Sparrow 2020).
The ethical debate centers on whether the requirements of just warfare (jus in bello) can be satisfied by machines (for background, see Walzer 1977; Walzer 2012). Conventional Western ethics and international humanitarian law demand discrimination between combatants and civilians, proportionality between military advantage and collateral harm, and accountability for violations. Critics such as Asaro (2020) and Sparrow (2016) argue these requirements presuppose human moral judgment: to kill legitimately in war requires recognizing the target as a human being with inherent worth and consciously deciding that taking their life is justified. This "interpersonal relationship," even in its most minimal wartime form, cannot exist when a machine makes the lethal decision.
Jewish thinking on war and military ethics has developed significantly in the past century. In English, an excellent resource on the subject is Brody (2023). While focused on Jewish law, Brody covers a wide range of viewpoints, including those of modern Jewish pacifists such as Martin Buber, Hillel Zeitlin, and the Orthodox rabbi Aaron Samuel Tamares.
Jewish law and thought offer several frameworks for analyzing AWS, though sustained scholarly engagement with this specific technology remains limited. The most direct halakhic questions concern responsibility and causation: when an autonomous system causes wrongful death, who bears culpability? Traditional categories of grama (indirect causation), the laws of bor (pit) and esh (fire), and the requirements of sheliḥut (agency) all provide potential frameworks, but they require significant extension and creative thinking to address AI-initiated harm, which differs in important respects from ancient instances of torts and liability. Broader ethical questions touch on fundamental issues in Jewish thought: the significance of human judgment in life-and-death decisions, the scope of kavod habriyot (human dignity) in wartime, and whether the requirements of just warfare articulated in biblical and rabbinic sources presuppose human moral agency.
Jewish just war theory, while underdeveloped due to nearly two millennia of Jewish political powerlessness, does establish principles relevant to AWS. Maimonides codifies requirements to offer peace before attack and to leave besieged cities an escape route, laws that classical commentators explain as inculcating compassion even in war. The rodef (pursuer) doctrine, which permits killing to prevent murder, requires using minimum necessary force, implying ongoing proportionality assessment. The elaborate procedural requirements for capital cases in Sanhedrin, while not directly applicable to warfare, reflect deep reluctance about taking human life and insistence on rigorous human deliberation before doing so. Whether these principles can be satisfied by pre-programmed decision trees or require real-time human moral judgment is the crux of the halakhic question.
The single sustained Jewish scholarly treatment of AWS and battlefield dignity, by Mois Navon, argues that critics commit a "category mistake" by applying peacetime dignity standards to wartime contexts. Drawing on sources from Rashi to R. Abraham Isaac Kook, Navon contends that wartime operates under distinct ethical norms (mishpatei melukha) where dignity is expressed through courage and self-sacrifice rather than interpersonal recognition. This argument, while marshaling significant source material, reads the sources tendentiously and sidesteps the deeper question of whether legitimate killing requires human moral agency regardless of how "dignity" is defined. The field remains open for alternative Jewish analyses that engage more carefully with the moral status of the target, the requirements of human judgment in halakhic decision-making, and the implications of tzelem Elokim (divine image) for wartime ethics.
Secondary Sources
Jewish War Theory
-
Bleich, J. David. "Preemptive War in Jewish Law." In Contemporary Halakhic Problems, Vol. 3, 251-292. New York: Ktav, 1989. Earlier halakhic discussion establishing a basis for identifying Jewish battlefield laws/ethics as a distinct legal category.
-
Brody, Shlomo. Ethics of Our Fighters: A Jewish View on War and Morality. Koren, 2023. Most thorough analysis in English of the halakha and theory behind Jewish military ethics. Includes a few pages on the future of war using autonomous and semi-autonomous military technologies.
-
Yisraeli, Shaul. Amud HaYemini. Jerusalem, 1966/1992. [Hebrew] Influential treatment of warfare halakha by a leading religious-Zionist posek and rabbinical court judge. Chapter 9 on mishpatei melukha (royal prerogatives) is particularly relevant to questions of state authority over military technology.
-
Schiffman, Lawrence H., and Joel B. Wolowelsky, eds. War and Peace in the Jewish Tradition. New York: Yeshiva University Press, 2007. English-language surveys of the state of Jewish just war theory, recognizing the challenge of applying ancient and medieval sources to modern warfare.
-
Walzer, Michael. "The Ethics of Warfare in the Jewish Tradition." Philosophia 40, no. 4 (2012): 633-641. From the author of "Just and Unjust Wars," a brief but important observation that Jewish thought about war is incomplete due to the historical absence of Jewish political sovereignty for nearly two millennia.
-
Klapper, Aryeh, Shlomo Ish-Shalom, and Michael Broyde. "Conversation: Halakhah and Morality in Modern Warfare." Meorot 6, no. 1 (2006). Three-way exchange among Orthodox scholars on contemporary warfare ethics; addresses tensions between halakhic requirements and military necessity.
AWS and Contemporary Applications
-
Grossman, Jonathan. "Jewish Perspectives on Artificial Intelligence and Synthetic Biology." Ḥakirah 35 (2024). While not focused on AWS, discusses halakhic liability frameworks for AI-caused harm, including how poskim have extended grama doctrines to hold AI owners/creators responsible.
-
Nevins, Daniel. "Halakhic Responses to Artificial Intelligence and Autonomous Machines." Rabbinical Assembly, 2019. Conservative movement responsum; pages 40-42 briefly address AWS, concluding that "the decision to take human life should never be delegated to a machine."
-
Navon, Mois. "Autonomous Weapons Systems and Battlefield Dignity: A Jewish Perspective." In Alexa, How Do You Feel about Religion? Technology, Digitization and Artificial Intelligence in the Focus of Theology, edited by Anna Puzio, Hendrik Klinge, and Nicole Kunkel, 207-232. Darmstadt: WBG, 2023. The only sustained Jewish treatment of AWS and potential ethical frameworks, such as questions of human dignity.
Secular Ethics Literature and Background on AWS
-
Asaro, Peter. "Autonomous Weapons and the Ethics of Artificial Intelligence." In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 212-236. Oxford: Oxford University Press, 2020. Leading philosophical critique of AWS; argues that respecting human dignity requires recognizing targets as human and consciously deciding that killing is justified. Identifies three requirements for morally legitimate killing that machines cannot satisfy. Essential interlocutor for Jewish responses.
-
Sparrow, Robert. "Robots and Respect: Assessing the Case Against Autonomous Weapon Systems." Ethics & International Affairs 30, no. 1 (2016): 93-116. Develops the "interpersonal relationship" requirement for legitimate killing, drawing on Thomas Nagel; argues AWS are mala in se because they violate respect for the humanity of enemies. The dignity argument that Navon attempts to refute.
-
Leveringhaus, Alex. Ethics and Autonomous Weapons. London: Palgrave Macmillan, 2016. Balanced treatment of consequentialist and deontological arguments; discusses how "black box" recording could address some accountability concerns. Useful for understanding the range of positions in secular debate.
-
Sharkey, Amanda. "Autonomous Weapons Systems, Killer Robots and Human Dignity." Ethics and Information Technology 21 (2018): 75-87. Surveys dignity-based arguments against AWS; distinguishes different conceptions of dignity at stake. Helpful taxonomy for Jewish analysis of which dignity concepts are relevant.
-
Walzer, Michael. Just and Unjust Wars. New York: Basic Books, 1977; latest edition 2015. Primary and highly influential text on war and military ethics.
Overview
Brain-computer interface (BCI) devices establish a direct communication pathway between the human brain and an external computational system, bypassing the body's ordinary sensorimotor channels. The technology ranges from non-invasive electroencephalographic headsets that detect surface-level neural signals to fully implanted microelectrode arrays that read and stimulate individual neurons. BCIs may be able to restore lost physical function, such as by enabling a paralyzed patient to move a cursor, type a sentence, or control a prosthetic limb through thought alone. Aside from these clinical applications, they also hold promise for "human enhancement" or *transhumanism, whether by augmenting cognitive capacities (e.g., enhancing memory, accelerating learning, or granting direct access to networked information) or by enabling control of external machines through a purely neural interface.
Jewish law treats the human body not as the private property of the individual but as a trust held on behalf of its Creator. The prohibition against self-harm (ḥovel be-atzmo) derives from this theological premise. BT Bava Qamma 91b records a dispute over whether a person may wound himself: R. Eliezer permits it, but the Sages prohibit it, and Maimonides (Hilkhot Ḥovel u-Mazziq 5:1) codifies the prohibition. The Radbaz (R. David ben Zimra, d. 1573), in a responsum on whether one is obligated to sacrifice a limb to save another's life (Responsa 3:627), grounds the prohibition in the principle that "your life is not your own"—the body belongs to God, and one may not damage what belongs to another without permission. Surgical implantation of a BCI device necessarily involves a deliberate incision into the skull and insertion of foreign material into neural tissue. Under what conditions does Jewish law permit such intervention?
The distinction between restoration and enhancement is not merely technological but, from the standpoint of Jewish thought, morally and halakhically fundamental. A BCI that restores speech to a locked-in patient activates the obligation to heal; a BCI that grants a healthy person superhuman recall raises questions about the integrity of the human person as divinely fashioned.
When it comes to medical interventions, even invasive or potentially risky procedures, mainstream Jewish practice permits such treatments on the basis of the principle that pikuaḥ nefesh (the preservation of life) overrides virtually all prohibitions (BT Yoma 85b) and of the broader biblical-rabbinic mandate to heal the sick. BT Bava Qamma 85a derives the physician's license to practice from Exodus 21:19 ("he shall surely heal"), and Maimonides (Commentary on the Mishnah, Pesaḥim 4:9) sharply rebukes those who treat medicine as an impious encroachment on the divine prerogative, comparing them to those who would refuse to eat because God alone sustains life. For a patient with severe paralysis, locked-in syndrome, or treatment-resistant epilepsy, the implantation of a BCI presumably falls squarely within this mandate to heal, whether to restore a lost physical function or to alleviate physical suffering. It is only a short jump to include psychiatric disorders that may prevent a person from otherwise functioning normally in society.
The harder question arises with enhancement BCIs, implanted not to restore a lost capacity but to exceed natural human abilities. If the Torah's concept of tzelem Elohim (the image of God) is understood through the human capacity for rational thought, as indicated by Maimonides in Guide of the Perplexed 1:1 (and others), then augmenting cognitive capacity through a BCI might seem to amplify rather than diminish that divine tzelem. On the other hand, the entire account of God's creation of man can be read as God providing humanity with a certain natural imprint, one that would be distorted by such tampering with a person's most defining characteristics.
We may be able to transform this thorny theological question of transhumanism into a halakhic one in the following way: whether cognitive enhancement realizes or distorts the divine image might depend on whose action it is when a BCI translates neural activity into external effect. The transhumanist vision assumes a seamless continuity between intention and technologically mediated result, but an alternative view would distinguish between naturally occurring biological systems and artificial extensions of their architecture. In this way, we may ask to what extent Jewish law attributes the direct consequences of a person's thoughts to the person.
The halakhic literature on grama (indirect action) and the laws of Shabbat may thus provide a framework for this question, one whose implications extend well beyond Sabbath observance to the general problem of human agency in technologically mediated contexts. The Talmud (BT Bava Metzia 90b) records the following dispute: if one muzzles an animal (a prohibition ordinarily serious enough to incur lashes) using only one's voice, is this a punishable "action"? Rabbi Yochanan says yes, because "the twisting of his lips constitutes an action" (akimat piv havi ma'aseh), but Reish Lakish disagrees: "sound is not an action" (kala lo havi ma'aseh). The halakha follows R. Yochanan, but Tosafot limit this ruling to the case of muzzling because "through his speech he performs an action" (be-dibburo ka'avid ma'aseh): the speech counts as action because it produces a concrete physical result in the world. In other words, merely speaking words does not constitute a physical action unless those words bring about a physical occurrence, such as when an animal is scolded until it is too afraid to eat the food in front of it. This distinction of Tosafot is informative for BCI technology. Neural activity on its own is certainly not an "action," but the real question is whether a neural signal that produces a concrete result through a technological intermediary constitutes the person's action; Tosafot would appear to say yes.
One might suppose that thought-controlled devices resemble the halakhic literature's discussions of action through supernatural means. Rabbi Moshe Sofer (Hatam Sofer, Responsa 6:29), for instance, discusses whether Moses could have written Torah scrolls on Shabbat through a divine name, and the Halakhot Ketanot (2:98) considers whether one who kills a person through sorcery bears liability for murder. But R. Asher Weiss (Minchat Asher, Parashat Vayakhel 5775) rejects such comparisons: effects produced through supernatural or magical (segulit) means might be considered "the work of Heaven and the act of God," but "systems that were built and developed by human hands are like an axe in the hand of the woodcutter and a tool in the hands of the craftsman—and his actions they are." A BCI, as a human-engineered system, surely falls into this category. Even if it relies on recently developed (or still-emerging) technology, it is fundamentally just another way of harnessing natural forces. This can also be compared to the Talmud's invocation (BT Bava Kamma 60a) of melekhet mahshevet [creative or intentional artifice] to explain why one who winnows grain on Shabbat is liable even when the wind does most of the work. The BCI functions analogously to the wind: an external but natural force that the user expects to rely upon to accomplish a certain result.
Either way, this application of the laws of Shabbat and the halakhot of direct causation weighs toward assuming that BCI-mediated effects are fully attributable to the human user, who would bear full moral and halakhic responsibility for what the device does. (See *Shabbat; the view represented here is not necessarily unanimously agreed upon.) Returning to the question of whether such enhancements would be permissible for a healthy person, we may reason that the BCI is an extension of the human person and not a mutilation thereof.
There is, however, a totally separate concern that arises from technological enhancements to human cognition: its possible use in *Torah study, or talmud Torah. Although such technologies remain, as of this writing, in the realm of science fiction, there is a possibility that an advanced BCI could upload the entire corpus of Torah literature into a person's brain, circumventing the need to "toil in Torah," as the blessing states. Rabbi Josh Flug raises this question: would gaining the knowledge base of a great Torah scholar, without dedicating the time and effort to actually learn those texts, truly fulfill the mitzvah?
The Vilna Gaon's interpretation of a well-known aggadic passage suggests that it would not. The Talmud (BT Niddah 30b) relates that a fetus is taught the entire Torah in utero, only to have an angel cause it to forget everything at birth. The Vilna Gaon (Commentary to Proverbs 16:26, cited by his brother in Ma'alot ha-Torah) explains this strange narrative through the talmudic dictum yagati u-matzati ta'amin, "if someone says he toiled and found, believe him" (BT Megillah 6b). The purpose of Torah study, he argues, is not merely to acquire information but to toil (ameilut) in learning so that the experience becomes transformative, shaping the learner's character and conduct. Torah knowledge gained without such toil lacks this transformative quality. R. Ḥayyim of Volozhin, the Vilna Gaon's foremost student, relates that this principle had practical significance for the Gaon himself: in his introduction to Sifra de-Tzeni'uta, R. Ḥayyim recounts that angels (maggidim) approached the Gaon offering to reveal hidden secrets of the Torah, but he refused, wanting to learn Torah only through his own toil. This anecdote suggests that Torah knowledge acquired too easily, let alone technologically implanted information, would fail to satisfy the religious value of ameilut ba-Torah.
However, one might argue that the Vilna Gaon's objection was specifically to the bypassing of cognitive effort, not to the possession of knowledge itself. If a BCI could provide instantaneous access to the Torah's textual corpus while still requiring the user to engage in the interpretive and analytical work of Torah study, perhaps a different form of ameilut could emerge. The Talmud (BT Berakhot 64a; Horayot 14a) debates whether "Sinai" (comprehensive knowledge) or "oker harim" (the ability to "uproot mountains," i.e., analytical skill) is the more valuable quality in a Torah scholar. The Gemara concludes that Sinai takes priority, because "all require the master of wheat," meaning that anyone, no matter how creative, is ultimately reliant upon the raw material of knowledge. Yet commentators throughout the centuries have noted that this calculus may shift as access to texts becomes easier, whether due to the changes wrought by the *Printing Press or computer databases. A BCI that rendered the knowledge question moot might decisively tip the balance toward the creative and analytical dimensions of Torah study.
Indeed, Jewish tradition has always valued ḥiddush (novel interpretation) as an essential component of Torah study. Rabbi Pinḥas Horowitz (Panim Yafot to Parashat Ki Tisa) interprets the liturgical phrase ve-ten ḥelkeinu be-Toratekha ("grant us our portion in Your Torah") as a prayer to accomplish one's individually unique share in Torah wisdom. Rav Kook taught that producing a novel Torah insight (ḥiddush) constitutes a more impactful revelation of God's will than merely absorbing existing knowledge, because "the light that is renewed through the connection of the Torah to one soul is not the same as the light born from its connection to another soul" (Orot ha-Torah 2:1) and similar statements can be found throughout Jewish literature of the past several centuries. If the ameilut of the past was the labor of acquiring knowledge, the ameilut of a BCI-enhanced future might be the labor of generating genuinely novel Torah thought, whatever that may look like.
Primary Sources
-
BT Bava Qamma 91b. Talmudic dispute over whether a person may wound himself; R. Eliezer permits, the Sages prohibit. Foundation for the halakhic treatment of bodily integrity and permissible medical intervention.
-
BT Berakhot 64a; Horayot 14a. Debate over whether Sinai (comprehensive knowledge) or oker harim (analytical brilliance) is more valuable in a Torah scholar; the Gemara concludes that "all require the master of wheat," but this calculus may shift as access to information becomes easier.
-
BT Yoma 85b. Establishes that pikuaḥ nefesh (preservation of life) overrides virtually all prohibitions; foundational for permitting invasive medical procedures including BCI implantation for therapeutic purposes.
-
BT Bava Qamma 85a; Exodus 21:19. Derives the physician's license to practice medicine from the biblical phrase "he shall surely heal"; basis for the rabbinic mandate to heal the sick.
-
BT Bava Metzia 90b; Tosafot ad loc. Dispute over whether sound constitutes a halakhic "action": R. Yochanan holds that "the twisting of his lips constitutes an action," while Reish Lakish disagrees. Tosafot limit R. Yochanan's ruling to cases where speech produces a concrete physical result—directly informative for whether BCI-mediated neural signals count as the user's action.
-
Maimonides, Commentary on the Mishnah, Pesaḥim 4:9. Sharply rebukes those who treat medicine as impious encroachment on divine prerogative, comparing them to those who would refuse to eat because God alone sustains life.
-
Maimonides, Guide of the Perplexed 1:1. Identifies tzelem Elohim (the image of God) with the human capacity for rational thought, raising the question of whether cognitive augmentation amplifies or distorts the divine image.
-
Maimonides, Hilkhot Ḥovel u-Mazziq 5:1. Codifies the prohibition against self-harm, establishing the normative halakhic position that one may not injure one's own body.
-
Radbaz (R. David ben Zimra), Responsa 3:627. Discusses whether one is obligated to sacrifice a limb to save another's life; grounds the prohibition of self-harm in the principle that the body belongs to God—"your life is not your own."
-
Hatam Sofer (R. Moshe Sofer), Responsa 6:29. Discusses whether Moses could have written Torah scrolls on Shabbat through invocation of a divine name; raises the question of action through non-standard means.
-
Rabbi Yisrael Ya’akov Chagiz, Halakhot Ketanot 2:98. Writes that one who kills through sorcery or the Name of God bears liability for murder.
-
BT Bava Kamma 60a. Invokes melekhet mahshevet (creative/intentional artifice) to explain liability for winnowing on Shabbat even when the wind does most of the work; analogous to BCI as an external but natural force the user relies upon.
-
BT Niddah 30b. Aggadic account of a fetus being taught the entire Torah in utero, only to have an angel cause it to forget at birth; foundational text for the question of whether Torah knowledge gained without toil fulfills the mitzvah.
-
BT Megillah 6b. "If someone says he toiled and found, believe him" (yagati u-matzati ta'amin); the Vilna Gaon interprets this as establishing that the purpose of Torah study is the toil itself, not merely the acquisition of information.
-
Vilna Gaon, Commentary to Proverbs 16:26. Explains the narrative of BT Niddah 30b through the principle that Torah study requires transformative toil (ameilut), not merely informational acquisition.
-
R. Ḥayyim of Volozhin, Introduction to Sifra de-Tzeni'uta. Relates that angels (maggidim) offered to reveal hidden Torah secrets to the Vilna Gaon, but he refused—he wanted to learn Torah only through his own toil.
-
R. Pinḥas Horowitz, Panim Yafot to Parashat Ki Tisa. Interprets the liturgical phrase ve-ten ḥelkeinu be-Toratekha ("grant us our portion in Your Torah") as a prayer for each person's individually unique share in Torah wisdom.
-
Rav Kook, Orot ha-Torah 2:1. Teaches that producing a novel Torah insight (ḥiddush) constitutes a uniquely impactful revelation, because "the light that is renewed through the connection of the Torah to one soul is not the same as the light born from its connection to another soul."
-
R. Asher Weiss, Minchat Asher, Parashat Vayakhel 5775. Distinguishes between effects produced through supernatural means ("the work of Heaven") and those produced through human-engineered systems, which are "like an axe in the hand of the woodcutter"—directly relevant to classifying BCI-mediated actions as the user's own.
Secondary Sources
Jewish Medical Ethics and the Integrity of the Body
-
Jakobovits, Immanuel. Jewish Medical Ethics: A Comparative and Historical Study of the Jewish Religious Attitude to Medicine and Its Practice. Bloch, 1959. Founding work of the field; provides a systematic halakhic treatment of the physician's mandate to heal, the prohibition of self-harm, and related issues.
-
Steinberg, Avraham. Encyclopedia of Jewish Medical Ethics. Trans. Fred Rosner. 3 vols. Feldheim, 2003. The most comprehensive reference in Jewish medical ethics, covering numerous obligations and prohibitions relating to medicine and bioethics.
-
Bleich, J. David. Bioethical Dilemmas: A Jewish Perspective. Vol. 1. Ktav, 1998. Rigorous halakhic analysis on the physician's obligation, permissibility of risky treatments, and the distinction between therapeutic and elective procedures with a variety of modern applications.
Torah Study, Ameilut, and Cognitive Enhancement
-
Flug, Josh. "Artificial Intelligence and Halacha: Navigating the New Frontier Across the Four Sections of Shulchan Aruch." Benjamin and Rose Berger Torah To-Go, Kislev 5785 (2024). Directly addresses BCI technology and Torah study in the Yoreh De'ah section; argues that the Vilna Gaon's refusal of angelic instruction and the talmudic emphasis on ameilut raise serious questions about whether BCI-implanted Torah knowledge fulfills the mitzvah.
-
Hollander, Max. "Ameilut in the Age of AI." The Lehrhaus, 2025. Explores the role of physicality in Torah learning, the religious imperative of ḥiddush (novel interpretation), and what ameilut means when information becomes instantly accessible; draws on R. Soloveitchik, Rav Kook, and the Tanya to argue that Torah study is irreducibly embodied and personally transformative.
Brain-Computer Interfaces: Science, Ethics, and Policy
-
Yuste, Rafael, et al. "Four Ethical Priorities for Neurotechnologies and AI." Nature 551 (2017): 159–163. Foundational article that launched the neurorights movement; proposes that privacy, identity, agency, and equality must be safeguarded as BCIs advance from clinical restoration to cognitive enhancement.
-
Ienca, Marcello and Roberto Andorno. "Towards New Human Rights in the Age of Neuroscience and Neurotechnology." Life Sciences, Society and Policy 13:5 (2017). Proposes four new human rights—mental privacy, mental integrity, psychological continuity, and cognitive liberty—as a framework for governing neurotechnologies including BCIs.
-
Burwell, Sasha, Matthew Sample, and Eric Racine. "Ethical Aspects of Brain Computer Interfaces: A Scoping Review." BMC Medical Ethics 18:60 (2017). Comprehensive review systematically mapping the BCI ethics literature across concerns of personhood, autonomy, stigma, privacy, research ethics, safety, responsibility, and justice.
-
Goering, Sara, et al. "Recommendations for Responsible Development and Application of Neurotechnologies." Neuroethics 14 (2021): 365–386. Detailed, actionable recommendations for BCI researchers, clinicians, and policymakers; emphasizes user-centered design, engagement with disabled communities, and attention to justice and access.
Coming Soon!
Overview
The "hard problem of consciousness"—explaining why and how physical processes give rise to subjective experience, to there being "something it is like" to be a creature (Nagel 1974; Chalmers 1996)—is the most central question in contemporary philosophy of mind. It is also, for those thinking about artificial intelligence, perhaps the most consequential: if consciousness is what confers moral status, then whether AI systems can be conscious determines whether they can be moral patients deserving of ethical consideration, or merely sophisticated tools.
Philosophers and cognitive scientists have developed competing frameworks for understanding mind and its relationship to computation. Functionalist approaches hold that mental states are defined by their causal roles—their relationships to inputs, outputs, and other mental states—such that any system implementing the right functional organization would possess genuine mental states, regardless of substrate (Thagard 2005, 2019). On this view, sufficiently sophisticated AI could in principle be conscious. Behaviorist and deflationary accounts go further, suggesting that consciousness simply is sophisticated information processing, or that "consciousness" names nothing over and above certain functional capacities (Dennett 1991). Against these views, John Searle's Chinese Room argument (1984) contends that syntax (rule-governed symbol manipulation) can never produce semantics (genuine understanding): a computer executing a program may simulate intelligence without possessing it, just as someone following rules to manipulate Chinese characters need not understand Chinese. Searle has applied this argument directly to contemporary AI, arguing that even sophisticated systems lack genuine consciousness (Searle 2015). For accessible overviews of these debates and their implications for AI, see Thagard (2021) and Bentley et al. (2018).
Jewish thought, however, did not develop a concept of "consciousness" in the modern sense that dominates contemporary philosophy. The term itself is a post-Cartesian innovation, emerging from Locke's definition of consciousness as "the perception of what passes in a man's own mind" (1690). Prior to the Enlightenment, the relevant category was soul—and the Jewish discourse on soul, while rich and multilayered, operates with different assumptions and toward different ends than the modern philosophy of mind. See entries on Humans, Souls and Minds, and Intentionality.
That said, certain parallels can be drawn. Philosophers of mind often distinguish between phenomenal consciousness (subjective experience, qualia) and access consciousness (the functional availability of information for reasoning, reporting, and behavior control). Some have further distinguished between first-order consciousness (awareness of external stimuli) and second-order or "higher-order" consciousness (awareness of one's own mental states, reflexivity, inner speech). Later kabbalistic and hasidic sources distinguish between multiple levels of soul—nefesh, ruach, neshamah, ḥayah and yeḥidah—and associate different capacities with each. Some Jewish thinkers linked the distinctively human soul to da'at (knowledge/understanding) and dibbur (speech), capacities that track loosely onto what philosophers now call higher-order cognition. Some have proposed mapping these concepts onto artificial minds (Navon 2024a, 2024b), but these readings and their ethical implications are certainly debatable.
Secondary Sources
Philosophy of Mind
- Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996. The canonical formulation of the "hard problem"; argues that consciousness cannot be explained by functional or computational accounts alone.
- Dennett, Daniel C. Consciousness Explained. Little, Brown, 1991. The leading functionalist account; argues that consciousness is sophisticated information processing, with implications for AI possibility.
- Nagel, Thomas. "What Is It Like to Be a Bat?" Philosophical Review 83, no. 4 (1974): 435-450. Classic argument that subjective experience cannot be captured by objective, third-person accounts.
- Searle, John R. Minds, Brains and Science. Harvard University Press, 1984. An accessible statement of Searle's philosophy of mind, based on his 1984 Reith Lectures; presents the Chinese Room thought experiment to argue that computation alone cannot produce understanding.
- Thagard, Paul. Brain-Mind: From Neurons to Consciousness and Creativity. Oxford University Press, 2019. Integrates neuroscientific and philosophical approaches.
Modern AI and Consciousness
- Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger. "Should We Fear Artificial Intelligence?" European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018. Available online. Policy-oriented overview of AI consciousness and risk.
- Searle, John R. "Consciousness in Artificial Intelligence." Talks at Google, 2015. YouTube video. Searle applies his arguments to contemporary AI systems.
- Thagard, Paul. Bots and Beasts: What Makes Machines, Animals, and People Smart? MIT Press, 2021. Accessible treatment of intelligence across biological and artificial systems.
Jewish Thinking on Consciousness and Artificial Intelligence
- Lorberbaum, Yair. In God's Image: Myth, Theology, and Law in Classical Judaism. Cambridge University Press, 2015. The definitive study of tzelem Elohim (image of God) in rabbinic and medieval Jewish thought; essential for understanding how Jewish sources conceptualized human distinctiveness without recourse to "consciousness."
- Mittleman, Alan L. Human Nature & Jewish Thought: Judaism's Case for Why Persons Matter. Princeton University Press, 2015. Survey of modern Jewish thinkers on human nature and its ethical implications.
- Navon, Mois. "To Make a Mind—A Primer on Conscious Robots." Theology and Science 22, no. 1 (2024a): 224-241. https://doi.org/10.1080/14746700.2023.2294530. Proposes mapping Jewish soul categories onto orders of phenomenal consciousness.
- Navon, Mois. "Let Us Make Man in Our Image: A Jewish Ethical Perspective on Creating Conscious Robots." AI and Ethics 4 (2024b): 1239-1250. https://doi.org/10.1007/s43681-023-00328-y. Expounds upon the framework proposed in Navon 2024a and develops its ethical implications.
Coming Soon!
Overview
The rapid expansion of artificial intelligence infrastructure imposes environmental costs that are real but frequently mischaracterized, in both directions. On the level of individual use, the numbers are modest: OpenAI reports that an average ChatGPT text query consumes about 0.34 watt-hours, roughly what an oven uses in one second (Altman 2025), and Google reports that a median Gemini text prompt uses 0.24 watt-hours, equivalent to watching television for nine seconds (Google 2025). But AI systems operate at extraordinary scale and the aggregate impact is substantial. According to the Lawrence Berkeley National Laboratory, U.S. data center electricity consumption more than tripled between 2014 and 2023, rising from roughly 60 to 176 terawatt-hours, driven largely by the growth of GPU-accelerated AI servers (LBNL 2024). The same report projects U.S. data center consumption reaching 250 to 400 terawatt-hours by 2028, equivalent to the annual electricity use of a quarter of American households. The International Energy Agency projects that global data center electricity consumption could surpass 1,000 terawatt-hours by 2026, which would place data centers between Japan and Russia in total electricity demand (IEA 2024). Water consumption presents a parallel concern: U.S. data centers consumed an estimated 17.5 billion gallons of water directly for cooling in 2023, representing roughly 0.3 percent of the total public water supply (LBNL 2024; Ren et al. 2024). Here too, an individual query's water footprint is tiny — Ren and his colleagues at UC Riverside found that a back-and-forth conversation of about thirty exchanges with GPT-3 consumes the equivalent of a half-liter bottle of water, of which only about 12 percent is potable water used directly for cooling, the rest being non-potable water consumed in electricity generation — but the cumulative figures are projected to grow sharply.
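The gap between the modest per-query figures and the utility-scale aggregate figures above can be made concrete with a back-of-the-envelope calculation. A minimal sketch in Python: the per-query energy is OpenAI's reported figure cited above, while the daily query volume is a purely hypothetical assumption chosen for illustration, not a number from the cited sources.

```python
# Per-query energy reported by OpenAI for an average ChatGPT text query (Altman 2025).
WH_PER_QUERY = 0.34

# Hypothetical daily query volume for illustration only -- NOT a figure from the sources.
QUERIES_PER_DAY = 1e9

# Aggregate annual energy in terawatt-hours (1 TWh = 10**12 Wh).
wh_per_year = WH_PER_QUERY * QUERIES_PER_DAY * 365
twh_per_year = wh_per_year / 1e12

# Compare against LBNL's 176 TWh figure for total 2023 U.S. data center consumption.
share_of_2023_us_datacenters = twh_per_year / 176

print(f"{twh_per_year:.3f} TWh/year")                      # ~0.124 TWh/year at this volume
print(f"{share_of_2023_us_datacenters:.4%} of 176 TWh")
```

Even at a billion queries per day, text inference alone is a small fraction of total data center load; the aggregate figures in the LBNL and IEA reports reflect training, video and image generation, and non-AI workloads as well.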
These environmental costs, however, must be assessed against the realistic alternatives. Some recent analyses argue that the power supply challenge is more tractable than commonly assumed: gas turbine manufacturers plan to produce capacity exceeding 200 gigawatts cumulatively through 2030, demand-response strategies could unlock 76 to 126 gigawatts of spare grid capacity, and emerging solar and geothermal pathways offer additional supply (Epoch AI 2025). More fundamentally, the environmental question is not simply how much energy AI uses, but how much it uses relative to what it replaces. A study published in Scientific Reports found that AI systems produce 130 to 1,500 times less carbon dioxide per page of text than human writers, and 310 to 2,900 times less per image than human illustrators (Tomlinson et al. 2024), but of course this comparison does not account for differences in quality or for the fact that displaced humans continue to consume energy regardless.
Still, the real environmental concern may be less about AI's per-task efficiency, which, judged on quantity alone, may indeed be superior to the human alternative, than about the sheer volume of new demand that AI creates: tasks that would simply never have been performed at all absent the technology. This distinction matters for the Jewish ethical analysis that follows: the question is not merely whether AI is wasteful, but whether the uses to which it is put justify the resources consumed, and whether the environmental burdens are distributed equitably.
Jewish tradition provides a rich framework for evaluating these costs. The foundational text is Deuteronomy 20:19-20, which prohibits the destruction of fruit-bearing trees during a siege: "When you besiege a city... you shall not destroy its trees by wielding an axe against them; for you may eat of them, and you shall not cut them down." The rabbis extended this prohibition, known as bal tashchit, well beyond its original military context into a general principle against the wanton destruction of any useful resource. Maimonides codifies it broadly (Hilkhot Melakhim 6:10): "Not only trees, but anyone who breaks vessels, tears garments, destroys a building, stops up a spring, or wastes food destructively violates bal tashchit." The Sefer ha-Hinukh (Mitzvah no. 529) explains the underlying rationale: "The righteous and people of good deeds... do not waste even a grain of mustard in the world, and they are distressed by any destruction or waste they see; and if they are able to save anything from destruction, they will do so with all their effort."
This ethic of conservation extends to the very purpose of the human being in the created world: Genesis 2:15 describes Adam as placed in the Garden "to work it and to guard it" (le'ovdah u'leshomrah), and a remarkable midrash in Kohelet Rabbah 7:13 imagines God leading Adam past the trees of the Garden and warning: "See My works, how beautiful and praiseworthy they are. Everything I have created, I created for you. Pay attention that you do not damage or destroy My world, for if you damage it, there is no one to repair it after you." The earth, moreover, is not humanity's to dispose of freely: "The earth is the Lord's and the fullness thereof" (Psalm 24:1), and the land "shall not be sold permanently, for the land is Mine" (Leviticus 25:23). Human beings are stewards, not owners, and their use of the natural world must be accountable to the One who entrusted it to them (R. Kook, Hazon haTzimhonut vehaShalom 1961, p. 207).
Yet bal tashchit is not an absolute prohibition against all destruction or resource consumption; it prohibits specifically wanton or purposeless destruction. The Talmud (Bava Kamma 91b-92a) discusses the scope of the prohibition and permits destruction that serves a legitimate human need. If cutting down a fruit tree is necessary because the land is needed for building, or because the tree causes damage to more valuable property, it is permitted. Maimonides (Hilkhot Melakhim 6:8) accordingly rules that a fruit tree may be cut down if its wood is more valuable than its fruit, or if it damages other trees or neighboring fields. The Rosh (Bava Kamma 8:15) adds that one may cut down a fruit tree if one needs the location for building, though the Ḥatam Sofer (ḤM 102) and subsequent authorities generally limit this leniency to genuinely significant needs such as housing, reasoning that destroying something valuable to serve a trivial purpose remains hashchatah gemurah, "complete destruction." The Ḥavvot Ya'ir (195) goes further, permitting removal even when the tree merely blocks light or diminishes the amenity of a dwelling, since even this indirect harm renders the cutting "not a destructive act." The operative principle is proportionality: the destruction must serve a purpose whose value exceeds what is destroyed, and alternatives that accomplish the same purpose with less destruction should be preferred. (See Catastrophic and CBRN Risk regarding potentially wide-scale destruction.)
Applied to AI, this framework does not yield a simple prohibition or permission. The potential benefits of artificial intelligence in medicine, scientific research, education, accessibility, and economic productivity are substantial, and a tradition that values human welfare and the alleviation of suffering cannot dismiss them. Indeed, to the extent that an AI system performs a task more efficiently than a human worker, the deployment of AI for that task may actually reduce total resource consumption, making its use not only permissible but arguably preferable under the logic of bal tashchit. The instinct of the righteous to fight against wasted resources, as the Sefer ha-Hinukh put it, may inspire one towards greater AI adoption instead of the opposite.
The question the principle poses, then, is not whether AI may consume resources at all, but whether it consumes them proportionately and whether sufficient effort is being made to minimize waste. The fact that much of AI's energy consumption is driven by novel demand complicates this analysis considerably. A data center powered by renewable energy that serves critical medical research occupies a very different moral position than one powered by fossil fuels that generates frivolous images (or worse) at industrial scales.
The halakhic tradition's insistence on distinguishing purposeful from purposeless destruction thus provides a framework not for prohibiting AI development but for demanding that its environmental costs be justified, minimized, and honestly accounted for. The principle further implies that when two paths to the same technological end exist — one more resource-intensive than the other — the more efficient path is not merely preferable but obligatory, echoing the Talmudic reasoning that prohibits the wasteful method even when the goal is legitimate.
Jewish law also addresses environmental concerns through a sophisticated body of zoning and urban planning regulations that bear directly on the siting and management of AI infrastructure. The Torah's legislation concerning the Levitical cities (Numbers 35:2-5) mandates that each city be surrounded by a belt of open space (migrash) extending one thousand cubits, which may be used neither for building nor for agriculture but must remain as open land. The Talmud (Arakhin 33b) rules that this open space is permanent: it may not be converted into built-up area nor into cultivated fields.
Rashi (on Numbers 35:3) explains that the migrash serves as noy la'ir, a beautification and amenity for the city, establishing the principle that urban areas require dedicated open space for the welfare of their inhabitants, and that such space is not "wasted" even though it could have otherwise been put to economically productive use. Rabbi Samson Raphael Hirsch, in his commentary on Numbers 35, develops this idea extensively, observing that the Torah's careful specification of distances reflects a comprehensive vision of urban planning in which the material expansion of the city is deliberately bounded by concern for its inhabitants' quality of life. The migrash is a structural commitment to the proposition that economic development must not consume every available resource without reserve.
This principle resonates with contemporary debates about data center siting, in which massive industrial facilities constructed in or near residential communities consume local power and water resources, generate significant heat and noise, and alter the character of the landscape. Similarly, the Mishnah in Bava Batra (2:8-9) establishes detailed rules for distancing harmful or noxious industries from population centers: a tannery, for example, must be placed at least fifty cubits from a city and only on a side from which prevailing winds carry odors away from inhabited areas. These rules are not merely prudential; they are enforceable communal obligations. The principle underlying these halakhot, codified by Maimonides in Hilkhot Shekhenim (chapters 10-11), is that those who generate harmful externalities bear the obligation to mitigate them, and the community has the right to compel such mitigation. The Talmud's discussion in Bava Batra 7b-8a of communal obligations regarding public infrastructure, and the thousand-year tradition that builds upon those laws, further illustrates the tradition's understanding that large-scale development requires collective deliberation and equitable allocation of costs and benefits. Applied to AI infrastructure, this logic suggests that the communities bearing the environmental burden of data centers, or even merely those in their geographic proximity, have a legitimate claim to participate in decisions about siting and to share in the benefits the infrastructure generates.
Primary Sources
- Genesis 1:28. "Fill the earth and subdue it." The mandate for human stewardship of the natural world, traditionally understood not as license for unlimited exploitation but as delegated authority carrying responsibility. The verb kivshuha ("subdue it") implies purposeful management, not wanton consumption.
- Genesis 2:15. "The Lord God took the man and placed him in the Garden of Eden, le'ovdah u'leshomrah — to work it and to guard it." The foundational text for the dual mandate of productive use and conservation; the human role is both to develop the world and to preserve it.
- Leviticus 25:23. "The land shall not be sold permanently, for the land is Mine; you are but strangers and sojourners with Me." Establishes that human ownership of natural resources is conditional and custodial, not absolute.
- Numbers 35:2-5. The legislation requiring that Levitical cities be surrounded by a belt of open space (migrash) of specified dimensions. The migrash may not be built upon or cultivated, establishing a legal precedent for preserving open land around urban centers as a permanent structural feature of responsible city planning.
- Deuteronomy 20:19-20. "When you besiege a city... you shall not destroy its trees by wielding an axe against them; for you may eat of them, and you shall not cut them down. Is the tree of the field a man, that it should be besieged by you?" The source of the prohibition of bal tashchit, which the rabbis extended from wartime destruction of fruit trees to a comprehensive prohibition against all wanton waste.
- Psalm 24:1. "The earth is the Lord's, and the fullness thereof; the world and those who dwell in it." The theological grounding of environmental stewardship: the natural world belongs to God, and human use of it must reflect accountability to its Owner.
- Kohelet Rabbah 7:13. "When the Holy One created the first human, He took him and led him past all the trees of the Garden of Eden and said: 'See My works, how beautiful and praiseworthy they are. Everything I have created, I created for you. Pay attention that you do not damage or destroy My world, for if you damage it, there is no one to repair it after you.'" The most explicit midrashic statement of the duty of environmental stewardship and of intergenerational responsibility for the integrity of the created world.
- Mishnah Bava Batra 2:8-9. Rules requiring that tanneries and other noxious industries be distanced at least fifty cubits from a city and situated so that prevailing winds carry harmful byproducts away from inhabited areas. Establishes the principle that harmful externalities must be controlled through enforceable spatial regulations.
- Babylonian Talmud, Bava Batra 7b-8a. Discussion of the communal obligation to contribute to shared infrastructure — walls, gates, and a beit sha'ar (gatehouse). Contributions are assessed in proportion to proximity and benefit, establishing a framework for equitable distribution of costs associated with large-scale communal projects.
- Babylonian Talmud, Bava Kamma 91b-92a. Central sugya on the scope of bal tashchit: establishes that a date palm damaging a grapevine may be removed because the grape is more valuable; discusses whether self-harm violates bal tashchit; and records the tradition that R. Ḥanina's son died because he cut down a fig tree prematurely, establishing that the prohibition carries not only legal but also spiritual gravity.
- Babylonian Talmud, Yoma 74a. The principle that ḥatzi shi'ur (a partial measure of a prohibited act) is forbidden on a Torah level. Applied by the Beit Yitzchak and the Dovev Meisharim to bal tashchit: even partial destruction of a resource (cutting branches, not the trunk) falls within the prohibition's scope.
- Babylonian Talmud, Arakhin 33b. The ruling that the migrash surrounding Levitical cities may not be converted into either built-up area or agricultural fields. Establishes the permanence of urban green space as a legal requirement, not merely a recommendation.
- Maimonides, Mishneh Torah, Hilkhot Melakhim 6:8-10. The comprehensive codification of bal tashchit, extending the prohibition from trees to all forms of purposeless destruction — breaking vessels, tearing garments, demolishing buildings, stopping up springs, and wasting food. Halakhah 9 adds that a tree whose yield is too meager to justify the cost of its maintenance may be removed. Read alongside Sefer ha-Mitzvot (Negative Commandment 57), where Maimonides classifies all destruction as a Torah-level violation, this codification raises the central question of whether non-arboreal waste is prohibited by Torah law or rabbinic enactment — a question resolved by R. Asher Weiss through the distinction between the "core" and the "included" scope of a prohibition.
- Maimonides, Sefer ha-Mitzvot, Negative Commandment 57. "Anyone who burns a garment for no purpose or breaks a vessel also transgresses lo tashchit and receives lashes." The apparently unqualified statement that all destruction violates the Torah prohibition, in tension with Hilkhot Melakhim 6:10, which assigns only rabbinic lashes for non-tree destruction.
- Maimonides, Mishneh Torah, Hilkhot Shekhenim 10-11. Codification of the laws governing the distancing of harmful activities from residential areas, including specific requirements for the siting of noxious industries and the rights of affected neighbors to demand mitigation.
- Sefer ha-Hinukh, Precept 529. The rationale for bal tashchit: "The root of this commandment is known — it is to teach us to love the good and the beneficial and to cling to it, and through this, goodness will cling to us and we will distance ourselves from all that is destructive and damaging." Frames the prohibition as an expression of a broader orientation toward the flourishing rather than the degradation of the created world.
- Shulḥan Arukh ha-Rav, Dinei Shemirat ha-Guf u-Val Tashchit 10:14-15. Rules that bal tashchit applies to ownerless property a fortiori: if the Torah prohibits destroying a non-Jew's trees during siege, how much more so ownerless resources. Also permits cutting a fruit tree that blocks light from a dwelling, following the Ḥavvot Ya'ir.
- Ḥatam Sofer, Responsa, Ḥoshen Mishpat 102. Limits the Rosh's leniency (permitting tree removal when one needs the location) to genuinely significant needs such as housing, ruling that destroying something valuable for a trivial purpose remains hashchatah gemurah (complete destruction).
- Noda Bi-Yehudah (R. Yeḥezkel Landau), Responsa, Yoreh De'ah 10. Understands Maimonides' assignment of makkot mardut for non-tree destruction as indicating that the extension of bal tashchit beyond fruit trees is rabbinic, not Torah-level. This reading, shared by the Ḥayyei Adam (11:32) and the Maharit Bassan (101), is challenged by R. Asher Weiss's analysis of Sefer ha-Mitzvot.
- R. Avraham Yitzhak haKohen Kook, Hazon haTzimhonut vehaShalom (Lahai Ro'i, Jerusalem, 1961), p. 207. R. Kook's essay on vegetarianism and peace; the cited passage grounds the principle, noted above, that human beings are stewards rather than owners of the natural world and are accountable to the One who entrusted it to them.
Secondary Sources
Jewish Ecology
- Tirosh-Samuelson, Hava, ed. Judaism and Ecology: Created World and Revealed Word. Cambridge, MA: Harvard University Press, 2002. Comprehensive collection of essays examining the ecological dimensions of Jewish thought from biblical through contemporary periods; includes treatments of bal tashchit, stewardship, and the tension between dominion and responsibility.
- Benstein, Jeremy. The Way Into Judaism and the Environment. Woodstock, VT: Jewish Lights Publishing, 2006. Accessible introduction to Jewish environmental ethics, drawing on biblical, rabbinic, and contemporary sources; particularly useful for its treatment of the conceptual foundations of stewardship and the relationship between holiness and ecological responsibility.
- Vogel, David. "How Green Is Judaism? Exploring Jewish Environmental Ethics." Business Ethics Quarterly 11, no. 2 (2001): 349-63. Critical assessment of the strength and limitations of Jewish environmental teachings; argues that while the tradition contains important environmental resources, it does not straightforwardly yield a modern environmental ethic without significant interpretive work.
- Lamm, Norman. "Ecology in Jewish Law and Theology." In Faith and Doubt: Studies in Traditional Jewish Thought, 162-85. New York: KTAV, 1971. Early and influential Orthodox rabbinic engagement with environmental ethics; argues that bal tashchit and the stewardship mandate of Genesis 2:15 together constitute a comprehensive Jewish approach to ecological responsibility.
- Rakover, Naḥum. Environmental Protection: A Jewish Perspective. Israel: Institute of the World Jewish Congress, 1996. Collection and analysis of relevant Jewish sources on the values and parameters of environmentalism.
AI Energy Consumption and Environmental Impact
- Lawrence Berkeley National Laboratory. 2024 United States Data Center Energy Usage Report. Berkeley, CA: LBNL, 2024. The most authoritative source on U.S. data center energy consumption; documents the tripling of data center electricity use from approximately 60 TWh in 2014 to 176 TWh in 2023, driven largely by GPU-accelerated AI servers, and projects consumption of 250-400 TWh by 2028.
- International Energy Agency. Electricity 2024: Analysis and Forecast to 2026. Paris: IEA, 2024. Comprehensive analysis of global electricity demand trends including projections for data center consumption; estimates that global data center electricity use could exceed 1,000 TWh by 2026.
- "Is Almost Everyone Wrong About America's AI Power Problem?" Gradient Updates (Epoch AI), 2025. Data-driven analysis arguing that the U.S. power supply challenge posed by AI expansion is more tractable than commonly assumed, based on projected gas turbine manufacturing capacity, demand-response potential, and alternative energy pathways. Valuable for its detailed quantitative framework, though it does not address carbon emissions, water consumption, or localized environmental burdens.
- Ren, Shaolei, et al. "Making AI Less 'Thirsty': Uncovering and Addressing the Secret Water Footprint of AI Models." Communications of the ACM 67, no. 12 (2024). Presents a methodology for estimating AI's total water footprint, including both operational (cooling) and embodied (electricity generation) water; finds that a thirty-exchange conversation with GPT-3 consumes approximately a half-liter of water, of which roughly 12 percent is potable water used directly for data center cooling.
- Tomlinson, Bill, et al. "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans." Scientific Reports 14 (2024): 3732. Comparative analysis finding that AI systems produce 130 to 1,500 times less CO2 per page of text and 310 to 2,900 times less per image than human counterparts; the authors note that the comparison does not account for differences in output quality or for the continued energy consumption of displaced workers.
- O'Donnell, James, and Casey Crownhart. "We Did the Math on AI's Energy Footprint. Here's the Story You Haven't Heard." MIT Technology Review, May 20, 2025. The most rigorous independent measurement of per-query energy consumption across multiple open-source AI models; finds that a text query consumes energy ranging from roughly a tenth of a second to about eight seconds of microwave use, depending on model size, and that video generation is orders of magnitude more energy-intensive than text or image generation.
- Strubell, Emma, Ananya Ganesh, and Andrew McCallum. "Energy and Policy Considerations for Deep Learning in NLP." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019): 3645-50. Foundational study quantifying the energy costs and carbon emissions of training large neural network models; demonstrated that the environmental costs of NLP research had grown dramatically and called for greater transparency in reporting computational expenses.
- Crawford, Kate, and Vladan Joler. "Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data, and Planetary Resources." AI Now Institute, September 2018. An artistically presented and thoroughly researched mapping of the full material supply chain required to support a single device, from rare earth mining to energy consumption to electronic waste; provides great context and intuition for understanding the physical infrastructure behind AI systems.
Coming Soon!
Overview
The term "golem" (גולם) appears only once in the Hebrew Bible (Psalms 139:16), where it refers to the Psalmist's unformed substance as seen by God. In rabbinic literature, the word denotes a human body or formed—though not yet perfected—entity, as in Mishnah Avot 5:7, where the golem (a person lacking wisdom) is contrasted with the ḥakham (sage). In these early sources, as Moshe Idel has demonstrated, the word consistently referred to a human body or a human-shaped figure. The term came to designate an artificially created anthropoid only gradually; the earliest explicit use of "golem" for a magically animated creature appears in tenth-century Italian sources (Megillat Aḥima'atz), where it describes a corpse temporarily reanimated through the divine name. The full identification of "golem" with the magically created anthropoid became standard only by the seventeenth century.
Even if they did not use the term, however, the rabbis of the Talmud still discussed the possibility of creating artificial humans. A key passage is Sanhedrin 65b, which reports that Rava created a man (gavra) and sent him to Rabbi Zeira, who, upon discovering that the creature could not speak, ordered it to "return to dust." The same passage relates that Rav Ḥanina and Rav Oshaya would study Sefer Yetzirah every Sabbath eve and thereby create a calf, which they would then eat. These accounts established a lasting association between esoteric knowledge (particularly of divine names and letter combinations), creative power, and the question of what distinguishes artificial from natural life. The creature's muteness served as the touchstone of its non-human status—a theme that persists throughout the tradition and raises enduring questions about the relationship between embodiment, cognition, and linguistic capacity.
The golem tradition developed significantly in medieval Ashkenaz, where commentators on Sefer Yetzirah—especially Eleazar of Worms and other Ḥasidei Ashkenaz—elaborated detailed rituals for anthropoid creation through letter permutation and the inscription of divine names. These texts introduced the famous motif of animating the golem by inscribing emet (truth/אמת) on its forehead and deanimating it by erasing the first letter to leave met (death/מת). This binary operation of creation and destruction through symbolic manipulation represents a striking anticipation of computational logic. The famous legend of Maharal of Prague and his protective golem, despite its cultural ubiquity, is a nineteenth-century invention with no basis in contemporaneous sources.
The golem has served as a lens for thinking about artificial intelligence since at least the 1960s: Norbert Wiener titled his meditation on the ethical implications of cybernetics God & Golem, Inc. (1964), and the following year Gershom Scholem explicitly compared the golem to the computer in an address at the Weizmann Institute.
Coming Soon!
Overview
Jews’ engagement with artificial technologies is, by necessity, as old as Judaism itself; the earliest biblical passages discuss products of human industriousness (e.g., Genesis 4:20-21). Thus, historians may utilize tools such as archaeology to understand the material landscape of past Jewish (and non-Jewish) societies, better appreciate the role of technology in their lives, and interpret their texts accordingly (Hezser 2010). When it comes to the question of how new technologies impact Jewish law or custom, it would not be an exaggeration to say that Jewish legal writings on the topic amount to thousands upon thousands of books. Zomet, a single Israeli organization dedicated to such studies, has (as of this writing) published 45 volumes of collected articles, and merely perusing its list provides a good overview of the rabbinic discourse on technology over the past century. A noteworthy recent addition to this massive library is Ziring's halakhic analysis of communications technology (Ziring 2024), which bears directly on questions relating to modern media and, by extension, AI-mediated communication.
However, nearly all of this halakhic literature is preoccupied with the minutiae of how specific technologies impact or interact with various details of Jewish law; an uncharitable reader might characterize it as a million variations on the question “may this device be used on Shabbat?” The question of how Jews reacted theologically to the innovations that have made our twenty-first-century world unrecognizable to our ancestors is shockingly understudied, even in the context of medieval and early modern attitudes generally (White 1962, 1978). A few smaller treatments of the topic (Lubin 2016, Perl 2022, Navon 2024) can help guide future scholarship, but substantial work remains to be done, especially as the widespread adoption of artificial intelligence makes this discussion more urgent.
Exceptions to this general scholarly lacuna are limited to studies of specific innovations, such as the Jewish reception of the *printing press or the Copernican Revolution in astronomy (Brown 2013). Another set of useful resources is biographies of figures who engaged substantively with technological and scientific questions, such as Yosef Shlomo Delmedigo, a seventeenth-century rabbi, physician, and polymath (Barzilay 1974; Adler 1997). Other Jewish inventors and tinkerers were mostly less affiliated with the rabbinic elite and therefore have smaller literary legacies, but recent scholarship has brought more of these fascinating figures to light (Patai 1994; Ruderman 1988), and additional material can be found in the growing body of work studying Jews' relationship to the sciences (Ruderman 1995; Efron 2007).
Despite the dearth of secondary literature on this crucial topic, there are ample references and remarks in classical rabbinic sources that can be marshaled to develop a Jewish worldview on technology (see Primary Sources, linked also below). The number of potentially relevant sources is vast; for example, differing attitudes toward material innovation can be gleaned from the multifaceted halakhic literature reacting to newly invented devices (cf. Halperin 2012). Some of these discussions also center on the human role in *creation (see entry there). Navon (2024) and Goltz, Zeleznikow, and Dowdeswell (2020) offer some examples of how broader treatments of Judaism and technology may be viewed through the lens of AI ethics.
Primary Source Sheet
Secondary Sources
Jewish History and Material Culture
- Hezser, Catherine. "The Material of Ancient Jewish Daily Life." In The Oxford Handbook of Jewish Daily Life in Roman Palestine, edited by Catherine Hezser. Oxford University Press, 2010. Comprehensive survey of rabbinic engagement with material culture; essential background on historical methodology for studying technology in Jewish antiquity.
- Sperber, Daniel. "The Use of Archaeology in Understanding Rabbinic Materials: A Talmudic Perspective." In Talmuda De-Eretz Israel: Archaeology and the Rabbis in Late Antique Palestine, edited by Steven Fine and Aaron Koller, 321–346. De Gruyter, 2014. Methodological guide to integrating material evidence with textual sources.
Jews and Science
- Brown, Jeremy. New Heavens and a New Earth: The Jewish Reception of Copernican Thought. Oxford University Press, 2013. Traces Jewish responses to the Copernican Revolution across halakhic, philosophical, and kabbalistic registers; demonstrates the range of strategies available for accommodating disruptive scientific innovations.
- Efron, Noah. Judaism and Science: A Historical Introduction. Greenwood Press, 2007. Accessible survey of the full sweep of Jewish engagement with natural philosophy and science; useful orientation to the field.
- Efron, Noah J. "Irenism and Natural Philosophy in Rudolfine Prague: The Case of David Gans." Science in Context 10, no. 4 (1997): 627–649. Study of an early modern Jewish astronomer navigating between Jewish tradition and the new science in a cosmopolitan imperial setting.
- Harrison, Peter, ed. The Routledge Companion to Religion and Science. Routledge, 2012. Comprehensive reference work with several chapters on Jewish involvement in science and the impact of scientific developments on Jewish thought.
- Ruderman, David B. Jewish Thought and Scientific Discovery in Early Modern Europe. Yale University Press, 1995. Foundational study of how early modern Jewish intellectuals negotiated between traditional learning and new scientific knowledge.
Modern Science and Technology in Halakhic Sources
- Halperin, Mordechai. Refu'ah, Metzi'ut, v'Halakhah—U'lshon Ḥakhamim Marpei [Medicine, Reality, and Halakha]. 2012. [Hebrew] Responsa and essays by a leading authority on medical halakha; models how halakhic reasoning adapts to technological change.
- Kahana, Maoz. From the Noda BiYehuda to the Ḥatam Sofer: Halakha and Thought Facing the Challenges of the Time [Hebrew]. Zalman Shazar, 2015. Intellectual history of how major halakhic authorities in the eighteenth and nineteenth centuries responded to modernity.
- Kahana, Maoz. A Heartless Chicken and Other Wonders: Religion and Science in Early Modern Rabbinic Culture [Hebrew]. Bialik Publishing, 2021. Examines how eighteenth-century rabbis processed scientific anomalies and discoveries; directly relevant to questions of how halakha might respond to AI.
- Tirosh-Samuelson, Hava, and Aaron W. Hughes, eds. J. David Bleich: Where Halakhah and Philosophy Meet. Brill, 2015. Essays on a major contemporary halakhic authority known for his engagement with medical ethics and technology.
Jewish Attitudes toward Technology
- Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965). Available online. Early Orthodox engagement with speculative technology and its theological implications; models how traditional thinkers might approach AI.
- Lubin, Matt. "Bricks and Stones: On Man's Subdual of Nature." Kol Hamevaser 9, no. 2 (2016). Available online. Student essay exploring Jewish theological frameworks for human technological activity.
- Navon, Mois. "A Jewish Theological Perspective on Technology (Orthodox)." In St Andrews Encyclopaedia of Theology, edited by Brendan N. Wolfe et al. University of St Andrews, 2024. Available online. Concise overview of Orthodox Jewish approaches to technology, including traditional and contemporary sources.
- Perl, Elimelekh Y. "Jewish and Western Ethical Perspectives on Emerging Technologies." Undergraduate honors thesis, Yeshiva University, 2022. Available online. Comparative analysis of Jewish and secular ethical frameworks for evaluating new technologies.
- White, Lynn, Jr. Medieval Religion and Technology: Collected Essays. University of California Press, 1978. Influential arguments about religious attitudes shaping technological development; frames comparative questions about Jewish distinctiveness.
- Ziring, Jonathan. Torah in a Connected World: A Halakhic Perspective on Communication Technology and Social Media. Maggid Books, 2024. Contemporary halakhic treatment of digital technology; models the application of traditional legal reasoning to new technological contexts.
Social and Cultural Studies
- Dowdeswell, Tracey, and Nachshon Goltz. "Cultural Regulation of Disruptive Technologies: Lessons from Orthodox Religious Communities." Journal of Transportation Law, Logistics, and Policy 88, no. 1 (2021): 33–44. Case study of how Orthodox communities govern technology adoption; applicable to communal AI governance.
- Neriya-Ben Shahar, Rivka. Strictly Observant: Amish and Ultra-Orthodox Jewish Women Negotiating Media. Rutgers University Press, 2024. Comparative study of how traditional religious communities selectively adopt and adapt communication technologies.
Individual Figures
- Adler, Jacob. "J.S. Delmedigo and the Liquid-Glass Thermometer." Annals of Science 54 (1997): 293–299. Technical study of an early modern Jewish scientist's contribution to instrumentation.
- Barzilay, Isaac. Yoseph Shlomo Delmedigo (Yashar of Candia): His Life, Works, and Times. Brill, 1974. Biography of a pivotal figure who moved between traditional rabbinic learning and experimental science; illustrates tensions and possibilities in early modern Jewish technological engagement.
- Neher, André. Jewish Thought and the Scientific Revolution of the Sixteenth Century: David Gans (1541–1613) and His Times. Oxford University Press, 1986. Study of an early modern Jewish astronomer who sought to harmonize traditional learning with new cosmology.
- Ruderman, David B. Kabbalah, Magic, and Science: The Cultural Universe of a Sixteenth-Century Jewish Physician. Harvard University Press, 1988. Study of Abraham Yagel that explores the intersection of mysticism, medicine, and natural philosophy.
Coming Soon!