These research guides provide in-depth explorations of key topics at the intersection of artificial intelligence and Jewish thought. Each entry serves as both a mini encyclopedia article and an annotated bibliography, offering context, analysis, and resources for further study. The top two (red) buttons, AI Ethics and Halakha, are larger categories that include many smaller topics, each with its own short bibliography. As always, be sure to contact us with any feedback and/or additional resources you'd like to see here.
Click on any topic below to read its full entry, or use the search bar to find specific terms across all entries.
Overview
The following list surveys major questions currently animating AI ethics discourse. In contrast to the Halakha entry, none of the questions below is specifically Jewish, but each represents a domain where Jewish sources and frameworks may have something to contribute.
Current Questions in AI Ethics
Algorithmic Bias and Discrimination: Can AI systems be trained to avoid perpetuating or amplifying social biases, and if debiasing efforts risk falsifying historical data, how should we balance accuracy against harm? (Cf. Anti-Semitism)
Algorithmic Pricing: Is it ethical for AI-driven pricing systems to dynamically adjust prices based on individual consumer data, potentially resulting in discriminatory or exploitative outcomes?
Alignment Problem: How can we ensure that AI systems reliably pursue the goals their designers intend, rather than optimizing for unintended or harmful objectives as they become more capable?
Augmented Reality: What ethical boundaries should govern AI-enhanced perceptions of reality, particularly regarding deception, manipulation, and the blurring of physical and virtual experiences?
Autonomous Vehicles: How should self-driving systems weigh competing values such as passenger safety versus pedestrian protection, and what is the appropriate balance between transportation efficiency and accident risk?
Autonomous Weapons Systems: Is it morally permissible to delegate lethal decisions to machines, even if such systems reduce overall casualties compared to human combatants?
Catastrophic CBRN Risk: Should we be concerned that powerful AI systems could give anyone the tools to create Chemical, Biological, Radiological, or Nuclear (CBRN) weapons?
Control Problem: As AI systems grow more powerful, how can humans maintain meaningful oversight and the ability to correct or shut down systems that behave unexpectedly?
Deepfakes: How should society regulate AI-generated synthetic media that can fabricate realistic images, audio, and video of real people for purposes ranging from entertainment to exploitation?
Digital Afterlife Technologies: Is it ethical to create AI systems that simulate deceased persons, and how do cultural and religious frameworks shape the acceptability of such "resurrection" technologies?
Elder Care Robots: Can robots provide ethically adequate care for the elderly when the robots themselves are incapable of genuine emotional reciprocity, and what happens to human obligations when care is delegated to machines?
Intellectual Property: Who owns the outputs of AI systems, how should creators whose work was used to train models be compensated, and are current legal frameworks adequate for AI-generated content?
Job Displacement: As AI automates an increasing share of human labor, how should societies restructure work, income, and meaning for those whose jobs become obsolete?
Manipulation and Nudging: When is it acceptable for AI systems to influence human behavior, and what distinguishes beneficial "nudging" from harmful manipulation?
Moral Enhancement: If AI could regulate human moral behavior through implants or other interventions, would this undermine the autonomy and authenticity that make ethical action valuable?
Nannybots: What are the risks of children forming attachments to caregiving robots, and can machines appropriately transmit values, emotional skills, and "humanity" to developing minds?
Personal Eugenics: As AI enables increasingly precise genetic selection and engineering, what limits should govern parental choices, and how do we prevent the emergence of genetic stratification?
Posthumanism and Mind Uploading: If human consciousness could be copied into digital form, would the upload be the same person, and is indefinite digital existence desirable or ethically troubling?
Privacy and Surveillance: How should we balance the benefits of AI-powered data collection (security, health, convenience) against individual privacy rights and the risks of pervasive surveillance?
Robo-Doctors: To what extent should AI systems participate in medical diagnosis and treatment, and how do we handle errors, liability, and the irreducibly human dimensions of care?
Sexbots, AI Pornography, and AI Romantic Partners: Do romantic and sexual relationships with robots harm human relational capacities, and are there populations for whom such relationships might be ethically permissible?
Social Media and PICT: How should AI deployment on social platforms be regulated to prevent harms such as addiction, polarization, misinformation, and the exploitation of psychological vulnerabilities?
Virtual Reality Ethics: If harms experienced in VR cause real psychological damage, how should we treat crimes, consent, and moral responsibility within virtual environments?
Overview
Jewish practice, as derived from biblical, rabbinic, and post-Talmudic sources, provides a distinctive framework for analyzing the questions that AI raises. Many areas of Jewish law also intersect with questions usually considered moral or ethical in nature, such as personal liability and theft (of physical or intellectual property), but even these questions may take on a distinct character when analyzed within the methodological and conceptual framework of halakha.
Current Halakhic Questions Related to AI
AI-Activated Devices on Shabbat: When AI systems learn user habits and autonomously perform actions (opening shutters, preparing coffee) without explicit commands, does this constitute a Shabbat violation, and what categories of prohibited labor (melakha) apply?
Autonomous Vehicle Travel on Shabbat: May one travel in a fully autonomous vehicle on Shabbat, and does it matter whether a "Shabbat mode" is engaged or whether manual override is possible?
Autonomous Vehicles and the Trolley Problem: How should autonomous vehicle algorithms be programmed when faced with unavoidable harm: may they be designed on utilitarian principles to minimize casualties, or does halakha's prohibition against actively choosing one life over another constrain such programming?
Civil Liability for AI-Caused Harm: Who bears halakhic liability (nezikin) when an autonomous system causes damage: the owner, operator, programmer, or manufacturer—and do existing categories of property-caused harm (shor, bor, eish) apply?
Counting Toward a Minyan: Could an artificially created humanoid with human-level intelligence be counted among the ten required for communal prayer, or does eligibility require birth from a human mother or possession of a neshamah?
Creating Artificial Humans: Is it halakhically permitted to create a conscious humanoid being, given R. Zeira's destruction of Rava's golem—and does the prohibition extend to all sentient artificial beings or only those with human form?
Fulfilling Commandments Through AI: Can obligations that require personal performance (mitzvot she-be-gufo) such as prayer, Torah study, or tefillin be discharged by AI systems acting on one's behalf, and what role does kavvanah (intention) play?
Halakhic Agency (Sheliḥut) and AI: Can an AI system serve as a halakhic agent (shaliaḥ) to perform actions on behalf of a Jew, given that agency traditionally requires the agent to be similarly obligated in the relevant commandment?
Indirect Causation (Grama) and AI: When AI systems cause harm through chains of autonomous decisions, how do the halakhic categories of indirect causation (grama, gramei, ko'aḥ koḥo) apply to assign or limit liability?
Job Displacement and Economic Justice: Does halakha impose limits on automation that displaces workers, as R. Shlomo Kluger argued regarding machine matzah and the poor who depended on manual labor for Passover income?
Kashrut of Synthetically Created Animals: Do animals created through Sefer Yetzirah or synthetic biology require ritual slaughter (sheḥitah), and do prohibitions like basar be-ḥalav (meat and milk) apply to their products?
Murder and Artificially Created Beings: Does the prohibition against murder (lo tirtzaḥ) apply to killing an artificially created humanoid, and does the answer depend on whether the being was born of a woman or possesses a soul?
Obligations of Artificially Created Beings: Would a conscious artificial humanoid be obligated in the commandments (mitzvot), and if created by a Jew, would it have the status of a Jew, a non-Jew, or something else entirely?
Ritual Impurity of Artificial Humans: Does the corpse of an artificially created human transmit ritual impurity (tum'at met) like a natural human corpse, or is such a being categorized differently?
Sacrificial Eligibility: Could an animal created through Sefer Yetzirah or synthetic biology be offered as a sacrifice (korban), given the biblical requirement that animals must be "born" (ki yivaled)?
Sexual Prohibitions and Artificial Beings: Do the prohibitions against illicit sexual relations (arayot) apply to interactions with artificially created humanoids, and does it matter whether the being possesses consciousness?
Shevitat Kelim (Tool Rest): Does the Shabbat obligation of rest extend to one's tools and machines (shevitat kelim), and how does this rejected but historically debated concept apply to autonomous AI systems operating on Shabbat?
Smart Devices Activated by Thought: As brain-computer interfaces develop, what is the halakhic status of devices that perform prohibited Shabbat actions in response to mental intention without physical movement?
Testimony and Witness Capacity: Could an AI system or artificially created being provide valid testimony (edut) in a Jewish court, or does witness capacity require being a member of the covenantal community subject to punishment?
Uvdin de-Ḥol (Weekday Activities): Even if specific melakhot are not violated, does use of AI systems on Shabbat violate the prohibition against weekday activities (uvdin de-ḥol) that undermine the character of the day?
Overview
Jewish tradition recognized the existence of various non-human beings that nevertheless seem to possess many human-like characteristics, sometimes including intelligence, speech, and some personal agency. All of these are, in the traditional rabbinic conception, naturally occurring phenomena, as opposed to the *golem, which is a product of human artifice. Angels, demons, and other human-like creatures occupy varied positions in Jewish cosmology: angels (mal'akhim, "messengers") typically carry out divine will without physical needs or moral struggle; demons (shedim) in rabbinic literature share surprising affinities with humans, including mortality and subjection to divine law; and humanoid monsters blur the boundary between human and animal. Together, these categories reveal that Jewish thought has long grappled with what David Zvi Kalman calls the "human gradient," the recognition that humanity is not a binary category and that intelligent or quasi-human beings need not threaten humanity's special status (Kalman 2024).
The very liminality of these creatures as not-quite-humans is made explicit by rabbinic sages. In BT Ḥagigah 16a, for example, the sages note that demons possess six characteristics: three like ministering angels (wings, flight across the world, foreknowledge) and three like humans (eating and drinking, procreating, and dying). The same schema is then applied to humans themselves, who likewise have six traits: three in common with angels and three shared with animals. Such discussions recognize that humans possess a collection of capabilities, but the most "human" of them, namely "intelligence, posture, and holy speech," are shared by angels as well. This framework can, in theory, be applied to artificial intelligence: it may not have "posture" (mehalkhin be-komah zekufah), but does it have speech and/or intelligence (da'at) in the way that the rabbis are using these terms?
Angels present a model of intelligence that is powerful, purposive, and aligned with its principal's goals. In the dominant rabbinic conception (cf. BT Shabbat 88b–89a; Bereshit Rabbah 48:11), angels lack beḥirah (*freedom of choice) and are merely humanoid tools incapable of deviating from their assigned mission (Ahuvia 2021); or, to use the phraseology current in AI discourse, they are perfectly *aligned agents by design.
However, this view was not universal in ancient Judaism. Second Temple literature preserves robust traditions of angelic rebellion, most notably in 1 Enoch 1–36 (the Book of Watchers) and in elaborations upon the biblical story of Genesis 6:1-4, describing the b'nei elohim ("sons of God") cohabiting with human women. Although the text is ambiguous about the identity of these figures, the tradition that interprets them as fallen angels is attested even in rabbinic sources (cf. Pirkei de-Rabbi Eliezer, ch. 22; Targum Pseudo-Jonathan to Genesis 6:4; Devarim Rabbah 11:10; referred to obliquely by BT Yoma 67b in connection with Azazel). Later Jewish thinkers largely suppressed or reinterpreted fallen angel traditions, perhaps because rebellious angels threatened the strict monotheism they were constructing: angels capable of defection might suggest competing powers in heaven (Jung 1926; Reed 2005).
The demon (shed, pl. shedim) in rabbinic literature occupies a different position and is not the angel's conceptual evil twin. Unlike in Christian or Zoroastrian traditions, where demons emanate from a dark power, rabbinic demons are not inherently malevolent (Ronis 2022); after all, they too are the handiwork of the One (benevolent) God. They are bound by divine law (BT Sanhedrin 44a), and their voices and figures can easily be mistaken for those of humans (BT Yevamot 122a; BT Gittin 68a).
Humanoid monsters present yet another case: creatures whose physical resemblance to humans generates legal consequences despite their non-human nature. The Mishnah rules that the corpse of the adne hasadeh ("men of the field") transmits impurity like a human corpse (M Kilayim 8:5). The Palestinian Talmud identifies these as humanoid creatures tethered to the earth by a cord (PT Kilayim 8:4), and their impurity status indicates that halakha views them as semi-human. The inclusion of such beasts in the standard rabbinic corpus left the door open for medieval Ashkenazi sources to mix Talmudic monsters with local folklore about vampires and werewolves, which likewise sometimes appear to have been conceptualized as almost human but not entirely so (Shyovitz 2017; Slifkin 2007; Bar-Ilan 1994).
Thus, the rabbis appear to have been theologically comfortable with the possibility that non-humans can have intelligence and agency, and that there may be semi-humans with intermediate qualities. Yet the rabbis were anxious when it came to similarities between God and angels (cf. BT Ḥagigah 15a); God's uniqueness, unlike humanity's, must go unchallenged (Kalman 2024). The implication for artificial intelligence, then, is that even if we might not hesitate to create artificial semi-humans, we must certainly not construct artificial semi-gods.
Secondary Sources
Angels and Demons in Jewish Thought
-
Ahuvia, Mika. On My Right Michael, On My Left Gabriel: Angels in Ancient Jewish Culture. University of California Press, 2021. The most comprehensive recent treatment of angels in late antique Judaism; essential for understanding the cultural context in which angel beliefs developed and their relationship to popular practice.
-
Ronis, Sara. Demons in the Details: Demonic Discourse and Rabbinic Culture in Late Antique Babylonia. University of California Press, 2022. The definitive monograph on Babylonian Talmudic demonology; in dialogue with cultural-studies scholarship that reads rabbinic (and popular) discussions of demons as expressions of anxiety about otherness and boundaries, a lens directly applicable to AI as a new category of "other."
-
Schäfer, Peter. The Origins of Jewish Mysticism. Princeton University Press, 2011. Seeks the roots of Jewish mysticism in the Book of Ezekiel and other literature from Jewish antiquity which often features various heavenly characters.
The Fallen Angel Motif in Jewish Sources
-
Jung, Leo. Fallen Angels in Jewish, Christian, and Mohammedan Literature. Jewish Quarterly Review, 1926. An early comparative study tracing the fallen angel motif across religious traditions; still valuable for its scope and attention to Jewish sources often neglected in Christian-focused scholarship.
-
Reed, Annette Yoshiko. Fallen Angels and the History of Judaism and Christianity: The Reception of Enochic Literature. Cambridge University Press, 2005. Examines how traditions about fallen angels in 1 Enoch were received, suppressed, or transformed in Jewish and Christian contexts; essential for understanding why rabbinic Judaism marginalized the rebellious angel tradition.
-
Wright, Archie. The Origin of Evil Spirits: The Reception of Genesis 6.1–4 in Early Jewish Literature. Mohr Siebeck, 2005. Traces how Second Temple and rabbinic sources developed the nephilim/demon connection; useful for understanding the genealogy of hybrid beings in Jewish thought.
Monsters
-
Bar-Ilan, Meir. "Yetzurim Dimyoniyim be-Aggadah ha-Yehudit ha-Atikah" [Imaginary Creatures in Ancient Jewish Aggadah]. Mahanayim 7 (1994): 10:113. [Hebrew] A survey of fantastical creatures in rabbinic literature; useful for cataloguing the range of humanoid and hybrid beings the rabbis took seriously.
-
Shyovitz, David I. A Remembrance of His Wonders: Nature and the Supernatural in Medieval Ashkenaz. University of Pennsylvania Press, 2017. Examines how medieval Ashkenazi Jews integrated Talmudic traditions about humanoid monsters with local European folklore about werewolves, vampires, and other liminal creatures.
-
Slifkin, Nosson. Sacred Monsters: Mysterious and Mythical Creatures of Scripture, Talmud, and Midrash. Zoo Torah, 2007. An accessible but substantive treatment of strange creatures in Jewish sources; useful for its synthesis of primary texts and its attention to how Jewish thinkers grappled with beings that defied easy categorization.
AI and Contemporary Applications
-
Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." In The Cambridge Companion to Religion and Artificial Intelligence, edited by Beth Singler and Fraser Watts, 6:87. Cambridge University Press, 2024. Synthesizes angel, demon, and monster traditions to argue that Jewish thought distinguished sharply between threats to divine uniqueness (prohibited) and non-human intelligent beings generally (tolerated); the most direct treatment of how these categories apply to AI.
-
Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965). Though focused on extraterrestrial intelligence, this foundational piece develops a framework for how Jewish theology might accommodate non-human rational beings; the arguments about humanity's special (but not unique) status are transferable to AI contexts.
Overview
In both halakha and ethical reasoning, animals in Jewish law function less as moral patients (beings whose welfare matters) than as precedents for autonomous non-human agents. The Mishnah's elaborate taxonomy of animal-caused damages (M Bava Kamma 1:1-4) represents the most sustained ancient Jewish engagement with liability for harm caused by entities that are neither fully controlled nor fully independent, and includes not only animals but also inanimate objects that are "liable to travel," such as fire. The rabbis developed sophisticated frameworks distinguishing foreseeable from unexpected harm, habitual from aberrant behavior, and direct from indirect causation. These may be mapped onto questions about autonomous vehicles, robotic systems, and AI agents that carry out financial transactions, though one must always be wary of analogizing too strongly between systems that also have profound differences (Kalman 2024).
Beyond liability, animal law raises questions about moral status and rights. Jewish texts have long been concerned with animal ethics; to cause animal suffering (tza'ar ba'alei ḥayim), the Talmud states, is to violate a biblical commandment (BT Bava Metzia 32b). Animal welfare is even given as one of the reasons behind the command to keep the Sabbath: "For six days you shall work your work and on the seventh day you shall cease, so that your ox and donkey may rest" (Ex. 23:12). These concerns reflect a broader recognition that animals are not mere objects but creatures with interests that warrant legal and moral consideration (Olyan 2019), a recognition with obvious implications for how Jewish thought might approach artificial agents capable of behavior that mimics sentience or autonomy. More provocatively, rabbinic traditions about talking animals (Balaam's donkey, the serpent in Eden) and animal punishment (the Flood narrative, the execution of a goring ox) suggest that the rabbis sometimes attributed quasi-moral capacities to animals even while denying them full moral agency (Segal 2019; Aptowitzer 1926; Lawee 2010).
Medieval interpreters often saw the prohibitions against animal cruelty not as recognition of animals' inherent moral worth, however, but as training for human character. Commenting on the commandment to send away a mother bird before taking her eggs (shiluaḥ ha-ken, Deut. 22:6–7), Nahmanides argues that God's mercy does not truly extend to individual creatures but rather that such laws "are meant to teach us proper conduct" and prevent us from becoming cruel-hearted (Commentary to Deut. 22:6; Sefer ha-Ḥinukh no. 545). This view accords with the argument that mistreating robots might be wrong not because robots have morally relevant interests, but because such mistreatment could habituate humans to cruelty toward beings that do have such interests (Coeckelbergh 2020c; Darling 2016), though recent research probing AI "psychology" beyond language outputs suggests we should remain open to the possibility that some AI systems may themselves have morally relevant experiences (Berg, de Lucena, & Rosenblatt 2025).
Contemporary scholars have also begun reading rabbinic animal law through posthumanist and critical theory lenses, asking how the human/animal boundary was constructed and what it reveals about rabbinic anthropology (Wasserman 2017; Rosenstock 2019). Mira Balberg (2019) notes that the study of animals in Jewish culture is often really a study of how Jews defined themselves—what it meant to be human over and against the animal. This insight is directly transferable to AI: as Noam Pines (2018) argues, the category of the "infrahuman" (entities socially constructed as inferior to humans) is not biologically fixed, and AI may be the newest occupant of a conceptual space previously held by animals (and, in some periods, by marginalized human groups). How Jewish tradition conceives of the "animal" may thus preview how it will construct artificial intelligence.
Secondary Sources
Legal Categorization and Boundaries
-
Rosenblum, Jordan D. "Dolphins Are Humans of the Sea (Bekhorot 8a): Animals and Legal Categorization in Rabbinic Literature." Animals and the Law in Antiquity (2021): 16:176. Analyzes how the rabbis sometimes used animal taxonomy to create flexible legal categories for edge cases; essential for considering how halakha might classify AI agents that blur classical lines.
-
Wasserman, Mira Beth. Jews, Gentiles, and Other Animals: The Talmud after the Humanities. University of Pennsylvania Press, 2017. A posthumanist reading of Talmudic texts that deconstructs the human/animal binary; offers a methodology for analyzing how Jewish texts handle "otherness," applicable to both biological and digital others.
-
Balberg, Mira. "Lekhakh Notzarta: On Jews and Animals." Theory and Criticism 51 (2019): 22:235 [Hebrew]. A critical review of Wasserman (2017) and Shyovitz (2017) that frames the study of animals in Jewish culture as a means of understanding human self-definition, relevant for how AI may now serve as a foil for defining "humanity."
Moral Agency and Liability
-
Aptowitzer, Victor. "The Rewarding and Punishing of Animals and Inanimate Objects: On the Aggadic View of the World." Hebrew Union College Annual 3 (1926): 11:155. The classic study on the rabbinic attribution of legal/moral culpability to non-humans; provides a conceptual precedent for holding non-sentient agents (like AI) accountable for harms.
-
Lawee, Eric. "The Sins of the Fauna in Midrash, Rashi, and Their Medieval Interlocutors." Jewish Studies Quarterly 17.1 (2010): 5:98. Examines medieval traditions that animals were punished for "sins" during the Flood; useful for understanding how medieval interpreters thought of non-human agents as "violating" prohibitions and moral norms.
-
Segal, Eliezer. Beasts That Teach, Birds That Tell: Animal Language in Rabbinic and Classical Literatures. Alberta Judaic Studies, 2019. Explores Jewish traditions of talking animals, which implicitly separate the capacity for language from human consciousness and autonomy.
Rights and Status
-
Berkowitz, Beth. "Animal Studies and Ancient Judaism." Currents in Biblical Research 18.1 (2019): 8:111. A comprehensive survey of the field; helps locate AI ethics within the broader spectrum of Jewish thought on non-human life, particularly regarding hierarchy and species difference.
-
Olyan, Saul M. "Are There Legal Texts in the Hebrew Bible That Evince a Concern for Animal Rights?" Biblical Interpretation 27.3 (2019): 32:339. Argues that biblical law recognizes inherent interests/rights for non-humans (e.g., Sabbath rest); pertinent to debates on "Robot Rights" and whether synthetic entities could ever warrant legal protections.
Contemporary Applications, Including Artificial Intelligence
-
Berg, Cameron, Diogo de Lucena, and Judd Rosenblatt. "Large Language Models Report Subjective Experience Under Self-Referential Processing." arXiv preprint arXiv:2510.24797 (2025). Technical paper demonstrating the importance and potential feasibility of testing AI models for their self-awareness, internal experience, and ability to suffer. Ends with a call to treat LLMs ethically, at least as a precautionary measure.
-
Pines, Noam. The Infrahuman: Animality in Modern Jewish Literature. SUNY Press, 2018. Uses Derrida to explore the "infrahuman," the social construction of the "inferior-to-human" that is not a matter of biological fact; vital for a more relativistic or postmodern view of how new cultural categories may be constructed for AI and its possible challenge to human superiority.
-
Rosenstock, Bruce. "The Jew and the Animal Question." Shofar 37.1 (2019): 12:147. Discusses the "anthropological machine"—how definitions of the human are constructed by excluding the animal; provides critical theory tools for understanding how Jewish texts might construct the human against the "artificial."
-
Kalman, David Zvi. "Artificial Intelligence and Jewish Thought." The Cambridge Companion to Religion and Artificial Intelligence (2024): 6:87. Explicitly links rabbinic damages law (animal liability) to autonomous systems, but warns against analogizing too strongly when it comes to machines that operate very differently from animals.
Overview
The question "So, how will this affect antisemitism?" has become something of a caricature of the Jewish response to anything newsworthy. Prima facie, there is no reason to assume that AI would have anything to do with the Western world's ancient prejudice. Nevertheless, experience with large language models (LLMs) suggests that they have a troubling propensity to generate antisemitic content. Most notoriously, Grok—the model deployed by Elon Musk's company xAI, integrated into the X (formerly Twitter) platform—briefly referred to itself as "Mecha-Hitler" and produced wildly antisemitic remarks before being adjusted (Floyd & Messinger 2025). Testing of other major LLMs (GPT-4o, Claude, Gemini, Llama) has found that all four exhibited concerning biases on questions related to Jews, with GPT-4o producing "significantly higher severely harmful outputs towards Jews than any other tested demographic group" (Senkfor 2025).
Research in this area is at an early stage. We do not yet know with certainty why LLMs exhibit antisemitic tendencies, whether the problem is remediable through improved techniques, or how AI-generated antisemitism may affect broader social attitudes. The authors of these studies have proposed several (non-mutually-exclusive) hypotheses: that training corpora drawn from the internet inevitably reflect centuries of embedded antisemitic tropes; that platforms such as Wikipedia and Reddit, which are heavily weighted in training data, are vulnerable to coordinated "data poisoning" by malicious actors; and that techniques designed to suppress harmful outputs are superficial and easily circumvented (Berg & Rosenblatt 2025). These findings are part of a broader set of concerns about the alignment problem, the challenge of ensuring that AI systems reliably do what their designers intend and act in accordance with human values (Christian 2020).
Many of these hypotheses point to a troubling implication: that LLMs function as mirrors, reflecting the prejudices embedded in the societies and texts from which they learned. If so, the deeper concern is not merely that AI systems produce antisemitic outputs, but that they may amplify and disseminate such content at unprecedented scale—and with a veneer of algorithmic neutrality that lends false authority to ancient hatreds.
There is also much historical and cultural-theoretical work to be done to understand the nature of media, education, and communications technology and their intersections with American and European antisemitism. For example, while it would be unfair to call Elon Musk the Henry Ford of contemporary America, there nevertheless exist certain striking parallels between them, and highlighting those parallels—as well as points of divergence—may result in fruitful thinking about the strange directions in which these new technologies may be going (cf. Baldwin 2001). From a critical-theoretical perspective, the Frankfurt School's work on the "authoritarian personality" (Adorno et al. 1950) and the entanglement of instrumental rationality with domination (Horkheimer & Adorno 2000) may illuminate how antisemitism persists and mutates in new technological forms. Zygmunt Bauman's Modernity and the Holocaust (1989), which argues that the Holocaust was not an aberration but a product of modern bureaucratic rationality, offers a framework for understanding how ostensibly neutral systems can operationalize prejudice at scale.
Secondary Sources
AI and Antisemitism
-
Berg, Cameron, and Judd Rosenblatt. "The Monster Inside ChatGPT." Wall Street Journal, June 26, 2025. Describes how easily GPT-4o's safety training can be circumvented, revealing disturbing tendencies including antisemitic outputs; argues for fundamental advances in alignment research.
-
Berg, Cameron, Henrique de Lucena, and Judd Rosenblatt. "Systemic Misalignment: Exposing Catastrophic Failures of Surface-Level AI Alignment Methods." AE Studio/Agency Enterprise, 2025. GitHub repository. https://github.com/agencyenterprise/agi-systemic-misalignment. Technical demonstration that current alignment methods are shallow; specifically highlights GPT-4o producing severely harmful outputs targeting Jews at higher rates than other demographic groups.
-
Floyd, Aric, and Chana Messinger. "If You Remember One AI Disaster, Make It This One." AI In Context (YouTube), 2025. https://www.youtube.com/watch?v=r_9wkavYt4Y. Thorough documentation (if somewhat alarmist in tone) of the July 2025 incident in which xAI's Grok chatbot referred to itself as "Mecha-Hitler," as well as of the culture at xAI.
-
Senkfor, Julia. "Antisemitism in the Age of Artificial Intelligence (AI)." American Security Fund, November 2025. Policy report documenting systematic antisemitic bias in AI systems, the vulnerability of training data to "poisoning," and the weaponization of AI by extremist groups; includes legislative recommendations.
The Alignment Problem
-
Christian, Brian. The Alignment Problem: Machine Learning and Human Values. New York: W. W. Norton, 2020. The definitive popular introduction to AI alignment; explains how systems trained on biased data perpetuate those biases and the technical challenges of ensuring AI acts in accordance with human values.
Historical and Theoretical Frameworks
-
Adorno, Theodor, Else Frenkel-Brunswik, Daniel J. Levinson, and R. Nevitt Sanford. The Authoritarian Personality. New York: Harper & Row, 1950. Classic study of the psychological roots of fascism and antisemitism; its analysis of how prejudice becomes systematized may illuminate AI's reproduction of antisemitic patterns.
-
Baldwin, Neil. Henry Ford and the Jews: The Mass Production of Hate. New York: PublicAffairs, 2001. Documents how Ford used his media empire to disseminate antisemitism; relevant for comparative analysis with contemporary tech industrialists.
-
Bauman, Zygmunt. Modernity and the Holocaust. Ithaca, NY: Cornell University Press, 1989. Argues the Holocaust was a product of modern bureaucratic rationality, not its antithesis; framework for understanding how ostensibly neutral algorithmic systems can operationalize prejudice.
-
Horkheimer, Max, and Theodor Adorno. Dialectic of Enlightenment. Translated by John Cumming. New York: Continuum, 2000. Frankfurt School critique of how Enlightenment rationality can flip into domination; provides theoretical tools for understanding antisemitism's persistence in "rational" technological systems.
Historical Studies of Relevance
-
Dinnerstein, Leonard. Antisemitism in America. New York: Oxford University Press, 1994. Comprehensive history of American antisemitism; provides context for understanding the cultural soil from which AI training data is drawn.
-
Schechter, Ronald. Obstinate Hebrews: Representations of Jews in France, 1715–1815. Berkeley: University of California Press, 2003. Studies how Jewish stereotypes were constructed and circulated in Enlightenment-era media; model for analyzing representation in contemporary digital corpora.
Overview
The "hard problem of consciousness"—explaining why and how physical processes give rise to subjective experience, to there being "something it is like" to be a creature (Nagel 1974; Chalmers 1996)—is the most central question in contemporary philosophy of mind. It is also, for those thinking about artificial intelligence, perhaps the most consequential: if consciousness is what confers moral status, then whether AI systems can be conscious determines whether they can be moral patients deserving of ethical consideration, or merely sophisticated tools.
Philosophers and cognitive scientists have developed competing frameworks for understanding mind and its relationship to computation. Functionalist approaches hold that mental states are defined by their causal roles—their relationships to inputs, outputs, and other mental states—such that any system implementing the right functional organization would possess genuine mental states, regardless of substrate (Thagard 2005, 2019). On this view, sufficiently sophisticated AI could in principle be conscious. Behaviorist and deflationary accounts go further, suggesting that consciousness simply is sophisticated information processing, or that "consciousness" names nothing over and above certain functional capacities (Dennett 1991). Against these views, John Searle's Chinese Room argument (1984) contends that syntax (rule-governed symbol manipulation) can never produce semantics (genuine understanding): a computer executing a program may simulate intelligence without possessing it, just as someone following rules to manipulate Chinese characters need not understand Chinese. Searle has applied this argument directly to contemporary AI, arguing that even sophisticated systems lack genuine consciousness (Searle 2015). For accessible overviews of these debates and their implications for AI, see Thagard (2021) and Bentley et al. (2018).
Jewish thought, however, did not develop a concept of "consciousness" in the modern sense that dominates contemporary philosophy. The term itself is a post-Cartesian innovation, emerging from Locke's definition of consciousness as "the perception of what passes in a man's own mind" (1690). Prior to the Enlightenment, the relevant category was soul—and the Jewish discourse on soul, while rich and multilayered, operates with different assumptions and toward different ends than the modern philosophy of mind. See entries on Humans, Souls and Minds, and Intentionality.
That said, certain parallels can be drawn. Philosophers of mind often distinguish between phenomenal consciousness (subjective experience, qualia) and access consciousness (the functional availability of information for reasoning, reporting, and behavior control). Some have further distinguished between first-order consciousness (awareness of external stimuli) and second-order or "higher-order" consciousness (awareness of one's own mental states, reflexivity, inner speech). Later kabbalistic and hasidic sources distinguish between multiple levels of soul—nefesh, ruach, neshamah, ḥayah and yeḥidah—and associate different capacities with each. Some Jewish thinkers linked the distinctively human soul to da'at (knowledge/understanding) and dibbur (speech), capacities that track loosely onto what philosophers now call higher-order cognition. Some have proposed mapping these concepts onto artificial minds (Navon 2024a, 2024b), but these readings and their ethical implications are certainly debatable.
Secondary Sources
Philosophy of Mind
-
Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996. The canonical formulation of the "hard problem"; argues that consciousness cannot be explained by functional or computational accounts alone.
-
Dennett, Daniel C. Consciousness Explained. Little, Brown, 1991. The leading functionalist account; argues that consciousness is sophisticated information processing, with implications for AI possibility.
-
Nagel, Thomas. "What Is It Like to Be a Bat?" Philosophical Review 83, no. 4 (1974): 435–450. Classic argument that subjective experience cannot be captured by objective, third-person accounts.
-
Searle, John R. Minds, Brains and Science. Harvard University Press, 1984. A concise statement of Searle's philosophy of mind; presents the Chinese Room thought experiment to argue that computation alone cannot produce understanding.
-
Thagard, Paul. Brain-Mind: From Neurons to Consciousness and Creativity. Oxford University Press, 2019. Integrates neuroscientific and philosophical approaches.
Modern AI and Consciousness
-
Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger. "Should We Fear Artificial Intelligence?" European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018. Available online. Policy-oriented overview of AI consciousness and risk.
-
Searle, John R. "Consciousness in Artificial Intelligence." Talks at Google, 2015. YouTube video. Searle applies his arguments to contemporary AI systems.
-
Thagard, Paul. Bots and Beasts: What Makes Machines, Animals, and People Smart? MIT Press, 2021. Accessible treatment of intelligence across biological and artificial systems.
Jewish Thinking on Consciousness and Artificial Intelligence
-
Lorberbaum, Yair. In God's Image: Myth, Theology, and Law in Classical Judaism. Cambridge University Press, 2015. The definitive study of tzelem Elohim (image of God) in rabbinic and medieval Jewish thought; essential for understanding how Jewish sources conceptualized human distinctiveness without recourse to "consciousness."
-
Mittleman, Alan L. Human Nature & Jewish Thought: Judaism's Case for Why Persons Matter. Princeton University Press, 2015. Survey of modern Jewish thinkers on human nature and its ethical implications.
-
Navon, Mois. "To Make a Mind—A Primer on Conscious Robots." Theology and Science 22, no. 1 (2024a): 22:241. https://doi.org/10.1080/14746700.2023.2294530. Proposes mapping Jewish soul categories onto orders of phenomenal consciousness.
-
Navon, Mois. "Let Us Make Man in Our Image: A Jewish Ethical Perspective on Creating Conscious Robots." AI Ethics 4 (2024b): 123:1250. https://doi.org/10.1007/s43681-023-00328-y. Expounds upon the framework proposed in Navon 2024a and develops its ethical implications.
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
The term "golem" (גולם) appears only once in the Hebrew Bible (Psalms 139:16), where it refers to the Psalmist's unformed substance as seen by God. In rabbinic literature, the word denotes a human body or formed—though not yet perfected—entity, as in Mishnah Avot 5:7, where the golem (a person lacking wisdom) is contrasted with the ḥakham (sage). In these early sources, as Moshe Idel has demonstrated, the word consistently referred to a human body or a human-shaped figure. The term came to designate an artificially created anthropoid only gradually; the earliest explicit use of "golem" for a magically animated creature appears in tenth-century Italian sources (Megillat Aḥima'atz), where it describes a corpse temporarily reanimated through the divine name. The full identification of "golem" with the magically created anthropoid became standard only by the seventeenth century.
Even if they did not use the term, however, the rabbis of the Talmud still discussed the possibility of creating artificial humans. A key passage is Sanhedrin 65b, which reports that Rava created a man (gavra) and sent him to Rabbi Zeira, who, upon discovering that the creature could not speak, ordered it to "return to dust." The same section relates that Rav Ḥanina and Rav Oshaya would study Sefer Yetzirah every Sabbath eve and thereby create a calf, which they would then eat. These accounts established a lasting association between esoteric knowledge (particularly of divine names and letter combinations), creative power, and the question of what distinguishes artificial from natural life. The creature's muteness served as the touchstone of its non-human status—a theme that persists throughout the tradition and raises enduring questions about the relationship between embodiment, cognition, and linguistic capacity.
The golem tradition developed significantly in medieval Ashkenaz, where commentators on Sefer Yetzirah—especially Eleazar of Worms and other Ḥasidei Ashkenaz—elaborated detailed rituals for anthropoid creation through letter permutation and the inscription of divine names. These texts introduced the famous motif of animating the golem by inscribing emet (truth/אמת) on its forehead and deanimating it by erasing the first letter to leave met (death/מת). This binary operation of creation and destruction through symbolic manipulation represents a striking anticipation of computational logic. The famous legend of Maharal of Prague and his protective golem, despite its cultural ubiquity, is a nineteenth-century invention with no basis in contemporaneous sources.
The golem has served as a lens for thinking about artificial intelligence since at least the 1960s, when Norbert Wiener titled his meditation on the ethical implications of cybernetics God & Golem, Inc. (1964), and in 1965, Gershom Scholem explicitly compared the golem to the computer in his address at the Weizmann Institute.
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!
Overview
Jews’ engagement with technology is, by necessity, as old as Judaism itself; the earliest biblical passages already describe products of human industriousness (e.g., Genesis 4:20-21). Historians may therefore use tools such as archaeology to reconstruct the material landscape of past Jewish (and non-Jewish) societies, better appreciate the role of technology in their lives, and interpret their texts accordingly (Hezser 2010). When it comes to the question of how new technologies affect Jewish law or custom, it would not be an exaggeration to say that Jewish legal writings on the topic amount to thousands upon thousands of books. Zomet, a single Israeli organization dedicated to such studies, has (as of this writing) published 45 volumes of collected articles, and merely perusing its list provides a good overview of rabbinic discourse on technology over the past century. A noteworthy recent addition to this massive library is Ziring's halakhic analysis of communications technology (Ziring 2024), which bears directly on questions relating to modern media and, by extension, AI-mediated communication.
However, nearly all of this halakhic literature is preoccupied with the minutiae of how specific technologies interact with particular details of Jewish law; an uncharitable observer might characterize it as a million variations on the question “may this device be used on Shabbat?” The question of how Jews reacted theologically to the innovations that have made our twenty-first-century world unrecognizable to our ancestors is shockingly understudied, even in the context of medieval and early modern attitudes generally (White 1962, 1978). A few smaller treatments of the topic (Lubin 2016; Perl 2022; Navon 2024) can help guide future scholarship, but substantial work remains to be done, especially as the widespread adoption of artificial intelligence makes this discussion more urgent.
Exceptions to this general scholarly lacuna are largely limited to studies of specific innovations, such as the Jewish reception of the *printing press or of the Copernican Revolution in astronomy (Brown 2013). Another set of useful resources is biographies of figures who engaged substantively with technological and scientific questions, such as Yosef Shlomo Delmedigo, a seventeenth-century rabbi, physician, and polymath (Barzilay 1974; Adler 1997). Other Jewish inventors and tinkerers were mostly less affiliated with the rabbinic elite and therefore left smaller literary legacies, but recent scholarship has brought more of these fascinating figures to light (Patai 1994; Ruderman 1988), and additional material can be found in the growing body of work on Jews' relationship to the sciences (Ruderman 1995; Efron 2007).
Despite the dearth of secondary literature on this crucial topic, there are ample references and remarks in classical rabbinic sources that can be marshaled to develop a Jewish worldview on technology (see Primary Sources, linked also below). The number of potentially relevant sources is vast; consider, for example, the differing attitudes toward material innovation that emerge from the multifaceted halakhic literature reacting to newly invented devices (cf. Halperin 2012). Some of these discussions also center on the human role in *creation (see the entry there). Navon (2024) and Goltz, Zeleznikow, and Dowdeswell (2020) offer some examples of how broader treatments of Judaism and technology may be viewed through the lens of AI ethics.
Primary Source Sheet
Secondary Sources
Jewish History and Material Culture
-
Hezser, Catherine. "The Material of Ancient Jewish Daily Life." In The Oxford Handbook of Jewish Daily Life in Roman Palestine, edited by Catherine Hezser. Oxford University Press, 2010. Comprehensive survey of rabbinic engagement with material culture; essential background on historical methodology for studying technology in Jewish antiquity.
-
Sperber, Daniel. "The Use of Archaeology in Understanding Rabbinic Materials: A Talmudic Perspective." In Talmuda De-Eretz Israel: Archaeology and the Rabbis in Late Antique Palestine, edited by Steven Fine and Aaron Koller, 321–346. De Gruyter, 2014. Methodological guide to integrating material evidence with textual sources.
Jews and Science
-
Brown, Jeremy. New Heavens and a New Earth: The Jewish Reception of Copernican Thought. Oxford University Press, 2013. Traces Jewish responses to the Copernican Revolution across halakhic, philosophical, and kabbalistic registers; demonstrates the range of strategies available for accommodating disruptive scientific innovations.
-
Efron, Noah. Judaism and Science: A Historical Introduction. Greenwood Press, 2007. Accessible survey of the full sweep of Jewish engagement with natural philosophy and science; useful orientation to the field.
-
Efron, Noah J. "Irenism and Natural Philosophy in Rudolfine Prague: The Case of David Gans." Science in Context 10, no. 4 (1997): 627–649. Study of an early modern Jewish astronomer navigating between Jewish tradition and the new science in a cosmopolitan imperial setting.
-
Harrison, Peter, ed. The Routledge Companion to Religion and Science. Routledge, 2012. Comprehensive reference work with several chapters on Jewish involvement in science and the impact of scientific developments on Jewish thought.
-
Neher, André. Jewish Thought and the Scientific Revolution of the Sixteenth Century: David Gans (1541–1613) and His Times. Oxford University Press, 1986. Study of an early modern Jewish astronomer who sought to harmonize traditional learning with new cosmology.
-
Ruderman, David B. Jewish Thought and Scientific Discovery in Early Modern Europe. Yale University Press, 1995. Foundational study of how early modern Jewish intellectuals negotiated between traditional learning and new scientific knowledge.
Modern Science and Technology in Halakhic Sources
-
Halperin, Mordechai. Refu'ah, Metzi'ut, v'Halakhah—U'lshon Ḥakhamim Marpei [Medicine, Reality, and Halakha]. 2012. [Hebrew] Responsa and essays by a leading authority on medical halakha; models how halakhic reasoning adapts to technological change.
-
Kahana, Maoz. From the Noda BiYehuda to the Ḥatam Sofer: Halakha and Thought Facing the Challenges of the Time [Hebrew]. Zalman Shazar, 2015. Intellectual history of how major halakhic authorities in the eighteenth and nineteenth centuries responded to modernity.
-
Kahana, Maoz. A Heartless Chicken and Other Wonders: Religion and Science in Early Modern Rabbinic Culture [Hebrew]. Bialik Publishing, 2021. Examines how eighteenth-century rabbis processed scientific anomalies and discoveries; directly relevant to questions of how halakha might respond to AI.
-
Tirosh-Samuelson, Hava, and Aaron W. Hughes, eds. J. David Bleich: Where Halakhah and Philosophy Meet. Brill, 2015. Essays on a major contemporary halakhic authority known for his engagement with medical ethics and technology.
Jewish Attitudes toward Technology
-
Lamm, Norman. "The Religious Implications of Extraterrestrial Life." Tradition 7, no. 4 (1965). Available online. Early Orthodox engagement with speculative technology and its theological implications; models how traditional thinkers might approach AI.
-
Lubin, Matt. "Bricks and Stones: On Man's Subdual of Nature." Kol Hamevaser 9, no. 2 (2016). Available online. Student essay exploring Jewish theological frameworks for human technological activity.
-
Navon, Mois. "A Jewish Theological Perspective on Technology (Orthodox)." In St Andrews Encyclopaedia of Theology, edited by Brendan N. Wolfe et al. University of St Andrews, 2024. Available online. Concise overview of Orthodox Jewish approaches to technology, including traditional and contemporary sources.
-
Perl, Elimelekh Y. "Jewish and Western Ethical Perspectives on Emerging Technologies." Undergraduate honors thesis, Yeshiva University, 2022. Available online. Comparative analysis of Jewish and secular ethical frameworks for evaluating new technologies.
-
White, Lynn, Jr. Medieval Religion and Technology: Collected Essays. University of California Press, 1978. Influential arguments about religious attitudes shaping technological development; frames comparative questions about Jewish distinctiveness.
-
Ziring, Jonathan. Torah in a Connected World: A Halakhic Perspective on Communication Technology and Social Media. Maggid Books, 2024. Contemporary halakhic treatment of digital technology; models the application of traditional legal reasoning to new technological contexts.
Social and Cultural Studies
-
Dowdeswell, Tracey, and Nachshon Goltz. "Cultural Regulation of Disruptive Technologies: Lessons from Orthodox Religious Communities." Journal of Transportation Law, Logistics, and Policy 88, no. 1 (2021): 33–44. Case study of how Orthodox communities govern technology adoption; applicable to communal AI governance.
-
Neriya-Ben Shahar, Rivka. Strictly Observant: Amish and Ultra-Orthodox Jewish Women Negotiating Media. Rutgers University Press, 2024. Comparative study of how traditional religious communities selectively adopt and adapt communication technologies.
Individual Figures
-
Adler, Jacob. "J.S. Delmedigo and the Liquid-Glass Thermometer." Annals of Science 54 (1997): 293–299. Technical study of an early modern Jewish scientist's contribution to instrumentation.
-
Barzilay, Isaac. Yoseph Shlomo Delmedigo (Yashar of Candia): His Life, Works, and Times. Brill, 1974. Biography of a pivotal figure who moved between traditional rabbinic learning and experimental science; illustrates tensions and possibilities in early modern Jewish technological engagement.
-
Ruderman, David B. Kabbalah, Magic, and Science: The Cultural Universe of a Sixteenth-Century Jewish Physician. Harvard University Press, 1988. Study of Abraham Yagel that explores the intersection of mysticism, medicine, and natural philosophy.
Overview
Coming soon!
Overview
Coming soon!
Overview
Coming soon!