This article is an introductory piece for the book "Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society." The book is a dialogue between computer scientist and social critic Joseph Weizenbaum and Gunna Wendt.
The juxtaposition between the potential of technology and its actual performance can be quite discordant. Tools that promise to simplify work are used to automate people out of their jobs, devices that boast of their connectivity leave users feeling alienated, and machines that propel humanity into space are close kin to the missile systems that could bring about human destruction. Joseph Weizenbaum eloquently captured this disconnection in his writings on the "contradictory role" of technology, noting, "Our ventures into science and technology have brought us to the brink of self-destruction... while also providing many of us with unprecedented comfort and even self-actualization. Some of us have begun to think that this is not a fair trade after all."
As a computer scientist and professor at the Massachusetts Institute of Technology (MIT), Joseph Weizenbaum earned his place in computer history for his program ELIZA and his role in the development of the programming language SLIP. However, what set Weizenbaum apart from his peers and colleagues was not his success as a computer scientist but his awareness of how advances in his field affected the broader society. When he discovered that those around him were more interested in technology than in people, Weizenbaum embraced the role of iconoclast, even heretic — pushing back against the ideological embrace of technology at its very roots.
For Weizenbaum, computers could not be separated from the social environment in which they existed. Thus, he rejected the title of computer critic, preferring to style himself a social critic. Like a magician revealing the secrets of a trick, Weizenbaum warned computer users not to be taken in by mechanical wizardry, assuring them that they could understand how these machines operated in their lives. Weizenbaum insisted that computer programmers take responsibility for their creations, positioning himself against those he derided as "compulsive programmers," the "artificial intelligentsia," and anyone who refused to think about the meaning and application of their work.
Over the course of his life, from 1923 to 2008, Joseph Weizenbaum witnessed significant social, political, and technological transformations. These experiences left an indelible mark on his worldview — particularly because he was not a passive observer but an active participant in these changes, above all the technological ones. Not only computers themselves but also the internet and the range of purposes to which computers were put came under Weizenbaum's impassioned analysis. As a social critic and computer scientist, Weizenbaum produced critiques that have lost little of their power over time. This book showcases the richness of Weizenbaum's thought and his belief that issues involving computers are too important to be left solely to computer scientists.
After all, computers and technology still play a contradictory role in society.
From Berlin to Michigan to Massachusetts
Joseph Weizenbaum was born on January 8, 1923, in Berlin. Although his father had an Orthodox Jewish background, Weizenbaum's upbringing was not particularly religious, even though he and his brother received religious education. While the Weizenbaum family managed to escape Germany before the worst of the Nazi atrocities — leaving for the United States in 1936 — the experiences of growing up amid the rise of fascism left an impression on Weizenbaum that would follow him throughout his life, no matter where he went.
Because his family left Germany when he was only 13 years old — in fact, they left on his 13th birthday — Weizenbaum was only beginning to grasp the changes overtaking his birthplace. At the time the family left, the main groups being persecuted by the Nazis, at least from his perspective, were political opponents rather than those of Jewish descent. Yet the rise of the Nazis and their anti-Semitic policies had a direct impact on Weizenbaum's life, as Nazi laws forced him to leave the public school he had been attending and enter a Jewish boys' school. Berlin, and the world surrounding Weizenbaum, became an increasingly unsafe place. The policeman on the corner changed from a figure a child could turn to for help into someone a Jewish child had to avoid. Berlin became a city the stormtroopers frequented, where terrible things happened in back rooms, and where members of the Hitler Youth lurked on the streets, waiting to ambush Weizenbaum on his way home from school — yet to the young boy these events were merely evidence that "we were just living in a cruel society." Although Weizenbaum was not clear on the specific reasons for his family's departure at the time they emigrated, he was aware that his family "had just escaped something evil." He realized, with growing unease, that many of his former friends and classmates had remained in Berlin as he went first to England and then across the Atlantic.
Upon arriving in the United States, Weizenbaum quickly became aware of the differences between himself and his peers. Immigration was not something he had prepared for — it was sudden and forced — and so he arrived in America without speaking English. After being educated in German schools and Jewish schools in Germany, Weizenbaum found that he had to quickly close the knowledge gap between himself and his new peers. He not only had to learn how to live in a new country but also had to learn that country's history. Yet for Weizenbaum, being different was a source of strength as he adapted to life in Detroit, Michigan. He still struggled with English, but his interest in mathematics grew rapidly, for it was a subject he could understand: mathematics is a universal language. It was Weizenbaum's love for mathematics that ultimately led him to computers.
After graduating from high school, Weizenbaum attended Wayne State University in Detroit to study mathematics, earning both a bachelor's and a master's degree; although his studies were interrupted by his service as a meteorologist in the Army Air Corps during World War II, he returned to them once the war ended. Weizenbaum's introduction to computers came when the field was still in its infancy, as he had the opportunity to "assist in the construction of computers" while at Wayne State University. The personal computers of the twenty-first century bear little resemblance to the machines Weizenbaum helped build there. In fact, the computer he helped construct "filled an entire auditorium" and was nicknamed "Whirlwind," while the next computer was dubbed "Typhoon." After graduating from Wayne State University, Weizenbaum worked briefly in the private sector, helping Bank of America develop the Electronic Recording Machine-Accounting (ERMA) system while employed by General Electric. He left the corporate world in 1962 when MIT offered him a visiting professorship.
It was at MIT that Weizenbaum created ELIZA, and it was there that he gradually became more concerned with the impact of computers on society.
ELIZA
While at MIT, Joseph Weizenbaum developed the computer program that secured his place in computer history: ELIZA. The name was chosen for this "language analysis program because, like the Eliza of Pygmalion fame, it could be taught to 'speak' increasingly well." The program allowed a person to communicate with a computer in natural, conversational language, and the computer's replies could give the impression that it understood what was being said to it, even that it was genuinely responding. A person "conversing" with ELIZA would type their side of the exchange in natural language on a typewriter connected to the computer running the program; the computer would then generate a response and print it on the same machine.
The early scripts for ELIZA operated in a manner similar to the techniques of Rogerian psychotherapists — that is, ELIZA would typically respond to the user's input by rephrasing their words as a question. In fact, for this incarnation of ELIZA — sometimes referred to as DOCTOR — human users were instructed to interact with the program as if they were genuinely speaking to a psychiatrist. The reason for this instruction was that it made ELIZA appear truly engaged in conversation, since "psychiatric interviews are one of the few examples of dyadic natural language communication in which one participant can freely adopt a pose of almost complete ignorance about the real world." The result is that ELIZA appears far more knowing than it is — an effect produced by human conversationalists projecting onto the program their own sense of being understood. When ELIZA responds with "Tell me more about your family," the prompt does not give the impression of ignorance about families; quite the opposite.
The ELIZA program simulated participation in a real discussion by executing "transformation rules" that were applied when the program detected certain keywords in the text. If ELIZA received a message containing such a keyword, the program would decompose the text string containing it and reassemble it in a way that prompted further responses. Following these rules, "any sentence in the form of 'I am BLAH' can be transformed into 'How long have you been BLAH,' regardless of the meaning of BLAH." Users typed comments and statements containing keywords, and ELIZA would take these sentences and follow the rules of its script, for instance replacing first-person pronouns with second-person pronouns before producing a suitable reply. When ELIZA detected no suitable keyword, the script was designed to answer with a content-free remark or, under certain conditions, to return to an earlier transformation.
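To make this mechanism concrete, the sketch below implements a toy ELIZA-style loop in Python: a handful of keyword rules, pronoun swapping, the "I am BLAH" transformation, and a content-free fallback. It is purely illustrative — Weizenbaum's original was written in MAD-SLIP, and the rules, names, and responses here are invented for the example, not drawn from his DOCTOR script.

```python
import random
import re

# Illustrative ELIZA-style rules: a keyword pattern and response templates.
# These rules are invented for demonstration; Weizenbaum's DOCTOR script was far richer.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b.*", re.I), ["Tell me more about your {0}."]),
    (re.compile(r"\bi feel (.*)", re.I), ["Do you often feel {0}?"]),
]

# Content-free remarks used when no keyword matches (the fallback described above).
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

# First-person words are swapped to second person before being echoed back.
SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}


def swap_pronouns(fragment: str) -> str:
    """Replace first-person words with second-person ones, word by word."""
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())


def respond(user_input: str) -> str:
    """Scan for a keyword rule; decompose and reassemble the input, or fall back."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = swap_pronouns(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I am very unhappy about my job"))  # e.g. "How long have you been very unhappy about your job?"
    print(respond("Well, my mother worries a lot."))  # "Tell me more about your mother."
    print(respond("Nothing in particular."))          # a content-free fallback
```

Even this toy version shows why the illusion works: the program never parses meaning; it only decomposes a matched string and reassembles it into a plausible reply.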
In the first paragraph of Weizenbaum's article about ELIZA, he acknowledges that the computer seems to be performing magic; however, "once a particular program is unveiled, once its inner workings are explained in language simple enough to be understood, its magic disappears; it now appears to be nothing more than a collection of procedures, each of which is easily understood." Following this passage, Weizenbaum details exactly how ELIZA works — step by step, he shows that its performance is the result not of magic, nor of genuine understanding on the part of the program, but of clever programming. Of course, to convince interlocutors that it understands them, ELIZA relies largely on things outside its script. In conversations with ELIZA, "as previously mentioned, the speaker will drape ELIZA's responses in a veneer of seeming understanding." Even when humans know they are merely exchanging text with a computer, and even when they know the scripts and programs that produce specific answers, the magic of ELIZA demonstrates "how easy it is to create and maintain the illusion of understanding."
Weizenbaum emphasizes that ELIZA does not actually understand the information it receives — even though the responses generated by its script give the opposite impression. Once the "conversation" begins, one of ELIZA's main objectives is to keep the discussion going. It does this by concealing "any of its own misunderstandings" and by relying on the good faith of its human discussants not to break off prematurely when confronted with hiccups revealing that the program does not truly understand what they have typed. Genuine understanding can be difficult even between people, since two individuals from different backgrounds may have very different frames of reference. For Weizenbaum, the key is that humans "understand each other within acceptable limits," whereas programs like ELIZA can only "symbolically process these ideas." This is not proof of understanding but merely evidence of a successfully executed script.
Even though Weizenbaum is skeptical about the extent to which two people can truly understand each other, as ELIZA's creator he is confident that he fully understands the program — and so he is quite surprised by the ways others seem to misunderstand it. As Weizenbaum wrote, "Those who are very clear that they are conversing with a machine quickly forget this fact," with some even "requesting to be allowed to converse privately with the system, and after talking to it for a while, regardless of how I explain it, they insist that the machine really understands them." Furthermore, the extent to which ELIZA successfully simulated the work of a Rogerian psychotherapist left many psychiatrists impressed, with some even suggesting that the program could be used with real patients. The reactions to ELIZA left Weizenbaum troubled, the more so because he found himself disturbed by certain trends emerging elsewhere in computer science, such as the tendency to describe humans as similar to computers and the assertion that the human brain is "merely a meat machine."
What is clear to Weizenbaum is that computers have not only become powerful tools in people's lives, but that "we have allowed technological metaphors... to penetrate our thought processes so thoroughly that we have ultimately delegated the responsibility for asking questions to technology." The rise of this technological metaphor is partly due to computer scientists evading responsibility for what they create, even as the proliferation of computers has allowed the metaphor to spread widely among a public that does not fully understand how these machines work.
After the success of ELIZA, Weizenbaum turned to these challenges — a task that was more an exercise in "social criticism" than "computer criticism." Weizenbaum gradually became a prominent critic of the technological metaphor, even though his own post placed him at one of the sites from which that metaphor spread.
About "Computer Power and Human Reason"#
Joseph Weizenbaum's "Computer Power and Human Reason: From Judgment to Calculation" contains a wealth of content: an introductory lecture on the basic workings of computers, a popular presentation of mathematical principles in computer science, attempts to unveil the mysteries of computers, ethical challenges for those working in the field of computing, and a firm critique that just because computers can do something does not mean they should do it. Although the book covers mathematics and science, even warning unfamiliar readers that they may find these sections difficult, it is a book that does not seek to engage solely with an academic audience. In any case, "Computer Power" is not the first book to denounce the impact of technology on society. The historian and prominent technology critic Lewis Mumford clearly influenced Weizenbaum's thinking; however, a key factor that made Weizenbaum a technology critic is — unlike Mumford — Weizenbaum was actually a computer scientist.
At the beginning of "Computer Power," Weizenbaum lays out his technical credentials and explains how his experiences with computers prompted him to write such a critical book about these machines. He opens with ELIZA, but unlike his article, which explained a wealth of technical detail to a scientific audience, in "Computer Power" Weizenbaum tells the story of ELIZA in a way that highlights his own surprise at the reactions it elicited. Weizenbaum notes that he was astonished that practicing psychiatrists genuinely believed the program had therapeutic potential, acknowledges that the ease with which people invested emotion in their interactions with the computer shocked him, and emphasizes his surprise that so many in his field seemed to believe ELIZA represented a program capable of truly understanding natural-language prompts. Weizenbaum did not simply dismiss these odd encounters. Rather, as he put it, these experiences "gradually led me to believe that my experience with ELIZA was a symptom of deeper issues."
Weizenbaum emphasizes that computers are not the problem; rather, they merely embody a long-standing and dangerous social tendency to view humans in increasingly mechanized ways. In Weizenbaum's view, a debate is under way: "on one side are those who simply say that they believe computers can, should, and will do everything, and on the other side are those like me who believe that the things computers should do are limited." The word "should" is particularly important to Weizenbaum's argument, for it shifts the discussion from what functions computers can perform to whether they should be built to perform those functions in the first place. For Weizenbaum, this is a question of "the appropriate place of computers in the social order," which implies that computers can also have an inappropriate place. Despite being a respected professor at MIT, Weizenbaum did not hesitate to point out that "science can also be seen as an addictive drug," for "with increasing dosages, science has gradually become a chronic poison."
What distinguishes computers from other tools people use is the degree of autonomy these machines possess — once they are switched on, they can operate without further human control. The clock (a nod to Mumford) is an important early example of such an autonomous machine, and computers are autonomous too, though their functions go far beyond keeping time. The significance of these machines lies in the fact that they operate according to models of certain aspects of the real world — the clock, for example, divides the day into 24 hours, each hour into 60 minutes, and each minute into 60 seconds. Gradually, while mimicking certain aspects of reality, these automatic machines instill their model of reality in the very humans who built them. The model comes to replace what it simulates. Thus, with the support of technology, "experiences of reality must be expressed in numerical form to seem reasonable in the eyes of common sense."
Computers emerged in the years surrounding World War II, and in the years following the war they were seen by the military, industry, and commerce as necessary tools for solving a range of problems that humanity could not address without significant technological assistance. As miniaturization allowed smaller computers to fit into everything from offices to airplanes, a steady transition took place in which computers came to be viewed as indispensable components of emerging modern society. The ultimate result of this trend is that returning to previous ways of doing things became almost unimaginable. However, just because computers are considered indispensable does not mean they are truly indispensable. Rather, what has happened is that computers have become "a necessary condition for society's survival, in a form that the computer itself played a role in shaping."
Computers may seem to have fundamentally changed society because of their close ties to military needs, but in Weizenbaum's estimation, computers "are used to protect American social and political institutions. This at least temporarily helped them withstand tremendous pressures for change." In fact, in Weizenbaum's view, computers have enabled the preservation of the social, political, and economic status quo, even as their widespread dispersal has allowed mechanistic worldviews to take root in ever more areas — and even as computers have also been used to support the explosion of post-war consumerism. Whether or not computers were necessarily the right solution, they were welcomed in a range of fields "for reasons of fashion or prestige" — if one's competitor, whether a business rival or a rival superpower, had a computer, one could not afford to fall behind. While computers, as their name suggests, excel at computational tasks, there remain social challenges that lie beyond mere computational capability — "the effectiveness of technology is a question involving both the technology and its subjects." The worship of computers, however, is merely an adulation of "technology" that neglects "its subjects." Yet even writing in 1976, Weizenbaum recognized that computers had become intricately woven into society, making it important to acknowledge that "the new modes of action created by society often eliminate the possibility of acting in old ways." Computers rely on specific types of information, excel at specific types of tasks, and are guided by specific social, political, and economic forces — and though computers may be portrayed as having opened many doors, Weizenbaum emphasizes that they have also closed many.
One particular problem posed by computers is that, unlike simple tools, they are machines whose workings most of their users know very little about. One reassuring aspect of computers is their regularity and their adherence to routines — but "if we rely on that machine, we become servants of laws we do not know, and thus of a capricious law. This is the root of our distress." To alleviate this distress, in "Computer Power and Human Reason" Weizenbaum delves into how computers work, demonstrating complex computational scripts through relatively simple games and emphasizing that computers adhere strictly to the rules of those games. While Weizenbaum's chapters on "where the power of computers comes from" and "how computers work" may not suffice to turn a novice into a programmer, they help clarify what is really going on inside the machine. Weizenbaum explains the Turing machine in prose that tends toward the dense and technical, and describes how computers layer programs on top of one another, noting, "The alphabet of the machine language of all modern computers is a set composed of the symbols '0' and '1.' But their vocabularies and transformation rules vary greatly... computers are superb symbol manipulators." For all their efficiency, computers still must follow the rules of their programming and rely on their specific languages; a program's ability to execute its script successfully does not mean it has any real understanding of the world. In fact, as Weizenbaum states, "one real reason programming is difficult is that, in most cases, computers know nothing about those aspects of the real world that their programs are to process."
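As an illustration of the kind of rule-bound machine Weizenbaum describes — my sketch, not an example from the book — here is a tiny Turing-machine-style rule table in Python that does nothing but look up and obey its rules, in this case to add one to a binary number.

```python
# Illustrative only: a tiny Turing machine that adds 1 to a binary number on its tape.
# The rule table maps (state, symbol) -> (symbol to write, head move, next state); the
# machine simply looks up and obeys these rules, which is the point about rule-following.

RULES = {
    ("right", "0"): ("0", +1, "right"),   # scan right to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),   # fell off the end: step back and start carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, keep carrying left
    ("carry", "0"): ("1", 0, "done"),     # 0 + carry = 1, finished
    ("carry", " "): ("1", 0, "done"),     # ran past the left edge: write a new leading 1
}

def increment(binary: str) -> str:
    tape = list(" " + binary + " ")       # blank-padded tape
    head, state = 1, "right"
    while state != "done":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(increment("1011"))  # -> 1100
print(increment("111"))   # -> 1000
```

The point carries over directly: the machine's behavior is entirely explained by its rule table; any "understanding" of arithmetic resides with the person who wrote the rules.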
Computers and the programs they run do not occur organically in nature. Rather, they are the physical manifestation of a series of choices made by humans. As a computer scientist and professor at MIT, Weizenbaum was very familiar with the people whose decisions shaped the computer systems the broader public would ultimately use. Although Weizenbaum was himself a programmer, he did not shy away from criticizing his peers. For Weizenbaum, there is a distinction between "professionals" like himself and what he termed "compulsive programmers": the former "view programming as a means to an end, rather than an end in itself," and Weizenbaum likened these professionals to professional gamblers. He described "compulsive programmers," by contrast, as those for whom interaction with computers is an end in itself — even though such individuals may work on many projects, their primary goal is simply to keep working with computers, or "hacking." Weizenbaum's description of these "computer bums" is unflattering; they "exist only through computers, exist only for computers," a sketch of the computer programmer that has persisted as a stereotype to this day — but what makes Weizenbaum's description particularly biting is that he was not imagining this type of person; he was describing a type he frequently encountered during his years as a computer scientist and professor at MIT.
In Weizenbaum's estimation, the allure that computers hold for "compulsive programmers" stems from a fascination with, and adoration of, the power exhibited in computer systems. While "the pursuit of control is inherent in all technology," computers provide a space where skilled programmers can joyfully seize control. For "compulsive programmers... life is merely a program running on a giant computer," and thus "every aspect of life can ultimately be explained in programming terms." For Weizenbaum, the danger is that a growing number of those engaged in computing work are such "compulsive programmers," whose loyalty to computers has eclipsed all other values. Weizenbaum does not attribute this to any evil, however, but to a kind of hollow irresponsibility among certain programmers — as well as other modern scientists and technical experts — who mistake their technical and scientific means for ends in themselves. Although "compulsive programmers" may not lack skill, this "skill... is aimless, even nihilistic. It is simply disconnected from anything other than the tools it may be exercised upon." Perhaps the key detail in Weizenbaum's portrait of the "compulsive programmer," though, is that this is a character who believes that all the complexities of the world can be simplified to the point of being captured by computer programs.
In some respects, the computer is a stage on which programmers' scripts are executed. Computers excel at executing those scripts meticulously, following precise rules, so by examining the code they follow one can understand what computers are doing. The situation for humans is far more complicated. Admittedly, humans process and respond to vast amounts of information, and people's behavior follows some "laws that science can discover and formalize within certain scientific frameworks." Nevertheless, Weizenbaum rejects the idea that all human wisdom and understanding can be reduced to rules that fit within such frameworks — even though belief in the existence of such rules has underpinned the confidence of some artificial intelligence (AI) researchers. After all, as Weizenbaum asks, what kind of equipment would a machine need to "think about human issues like the disappointments of adolescent love"? Yet because of the existence of machines, the theories held by computer programmers possess a quality that makes them more than mere texts: "a theory written in the form of a computer program is both a theory and, when placed on a computer and run, a model to which the theory applies" — the computer provides the stage on which the script is executed. Although computers may yield impressive results, Weizenbaum warns that "a model is always a simplification, an idealization of what it seeks to simulate." Unfortunately, the complexity of computers and the inherent simplification of models often lead to widespread misunderstanding of what computers can do and have done.
"Computers have become a source of a truly powerful and often useful metaphor," yet "the public's embrace of the computer metaphor is based merely on the most vague understanding of a difficult and complex scientific concept." For Weizenbaum, the prevalence of this metaphor represents a dangerous trend, as those who do not fully understand how computers work gradually come to believe that everything in the world can be turned into a computer model. The computer metaphor allows the ideology of "compulsive programmers" to transcend those without programming experience, making them susceptible to the advice of those celebrating the latest technological achievements of computers. The result is that "the metaphor of computers becomes another lighthouse, under whose light people will seek answers to urgent questions only in its light." Of course, certain questions are indeed well-suited to be solved using computational methods — some of which even seem to demonstrate a certain achievement of human intelligence, such as chess skills — but Weizenbaum emphasizes that this victory relates to the computer's ability to perform calculations rapidly and execute logical programs. For some computer scientists, the types of problems computers excel at solving are almost synonymous with the types of problems humans attempt to solve, but for Weizenbaum, "it is precisely this unreasonable demand for universality that has lowered their use of computers, computing systems, programs, etc., from the status of scientific theory to that of metaphor."
Weizenbaum's personal experience of the misunderstandings surrounding the computer metaphor came primarily through ELIZA. To get a computer to do something, it must be told what to do. This does not necessarily mean the computer understands what it is being told; it merely means that its script enables it to execute a specific command. Humans excel at understanding "communication expressed in natural language," whereas computers require "the precision and clarity of ordinary programming languages." Although Weizenbaum's ELIZA could respond to prompts in natural language, the program itself did not understand what was being said to it — it was merely following a script — and it was among those unfamiliar with how computers work that this illusion of understanding proved most pronounced.
ELIZA has many features, but intelligence is not one of them. For Weizenbaum, ELIZA demonstrates how eager people are to attribute intelligence to machines even when it is unwarranted — an inclination he observed most strongly among some of his colleagues in the field of artificial intelligence, whom he dubbed "the artificial intelligentsia." While Weizenbaum acknowledges that AI scientists have created programs capable of performing many tasks, he regards as hubris the belief that any shortcoming of artificial intelligence is merely "found within the programmatic limitations of specific systems." Those unfamiliar with computers have some excuse for being captivated by ELIZA's performance, but what is the excuse of the "artificial intelligentsia"? Throughout his life, Weizenbaum witnessed tremendous leaps in computer capabilities and recognized that what is impossible for computers today may be possible tomorrow. But the question, for him, increasingly shifted from what computers can do to what they should do — whether there are properly human purposes that are unsuited to machines.
"Humans are not machines... computers and humans are not of the same species," is Weizenbaum's sharp rebuttal to what he perceives as beliefs that have allowed "the bizarre grand fantasies of artificial intelligence to develop." For Weizenbaum, intelligence is a complex and difficult concept that cannot be simply simplified. Therefore, attempts to ultimately define intelligence, such as IQ tests, are doomed to capture at most a glimpse of intelligence. It is clear that computers can succeed in feats that seem to indicate intelligence, but this also oversimplifies the matter. A computer may win at chess, but that does not mean it can change a baby's diaper. For Weizenbaum, these are "incommensurable" questions of intelligence. Computers excel at tasks involving quantification, but for Weizenbaum, humans possess many things that are fundamentally unquantifiable. Weizenbaum realizes that members of "artificial intelligence" may view this criticism as a scientific challenge, and they may attempt to respond to this challenge by creating more complex machines. However, for Weizenbaum, this is not a matter of academic dueling — it is a moral issue — because "the question is not whether such things can be done, but whether it is appropriate to delegate such human functions to machines."
Much about the human condition remains unknown — and much else about being human can only be learned through the honing of human experience — things not easily quantified or programmed. If computers or robots eventually mature to the point of exhibiting intelligence truly comparable to that of humans, their intelligence will nonetheless be quite "alien," formed under different conditions and built from a very different range of experiences. Beyond intelligence, there are emotion and the unconscious, which do not submit easily to cold, calculative logic — and so Weizenbaum counsels restraint, for "some things are beyond the full understanding of science." This sentiment is both a warning to his scientific colleagues and a reminder to the broader public to remain vigilant against claims of omnipotence. For Weizenbaum, it is bound up with the conviction that some tasks are simply unsuitable for computers — "since we do not now have any way to make computers wise, we should not now give computers tasks that demand wisdom."
Weizenbaum's warnings came at a time when many tasks unsuitable for computers had already been delegated to them. When Weizenbaum was writing "Computer Power and Human Reason," computers had become a common feature of businesses and large organizations, even if they had not yet become fixtures in every household. Although these machines were initially sold with the promise that they would be helpful, they "have both surpassed the understanding of their users and become indispensable to them." What was supposed to provide assistance has turned into something that renders people helpless. This predicament is particularly dangerous for those unfamiliar with how computers work, who have been led to believe that computers are infallible and who feel that their agency and responsibility have been usurped by machines. However, as Weizenbaum fiercely reminds his readers, "the myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!" The world envisioned by the celebrants of the computer metaphor is thus depicted as inevitable — and people who feel powerless before the machines begin to doubt whether voicing their opposition means anything. Rather than be likened to King Canute, they go with the flow.
Weizenbaum, however, refuses this tranquilized complacency. The rhetoric of science and of the "technocratic intellectuals" portrays itself as logically sound, but for Weizenbaum it is "instrumental reasoning, not true human rationality." In the worldview concocted under the computer metaphor, the immense complexity of the world is turned into something "programmatic": people speak of "inputs" and "outputs," of feedback loops, variables, parameters, and processes, until every connection to a concrete situation has been abstracted away. It is a recipe for passivity, indifference, and impotence — for human life becomes merely a data problem to be fed into machines and analyzed. Over-reliance on "instrumental rationality," and on the technologies that best embody it, has become a "superstition surrounded by black magic. And only the magician has the right to enter." As a successful computer scientist, Weizenbaum could himself have been one of these "magicians," but he was not taken in by instrumental rationality. Indeed, Weizenbaum knew that his dissent would be dismissed by technical experts as "anti-technology, anti-science, and ultimately anti-intellectual," but he insisted, "I am arguing for reason. But I argue that reason cannot be separated from intuition and feeling. I advocate for the rational use of science and technology, not the mystification of it, much less its abandonment. I urge the introduction of ethical thought into scientific planning. What I oppose is the imperialism of instrumental rationality, not reason."
Weizenbaum's call for a renewed ethics does not sound unlike the cries of Cassandra, and he does not forget that such cries tend to go unheeded. Science and technology are not a new force in human civilization, but the scale of their impact has grown enormously — their power has seduced many, it has outgrown what can be safely controlled and managed, and it is steadily eroding the ability to choose different paths. Many people enjoy the surface benefits of computer-driven technological advances and hesitate to give such devices up, and that renunciation is a serious matter; but Weizenbaum holds that "ethics, fundamentally, is about giving things up." If technological and scientific power has done nothing more than turn humans into well-entertained robots, what good is that? In a world already saturated with computers, what reason is there to believe that serious social and political problems persist for lack of computing power? If Weizenbaum calls for renouncing computers in certain cases, it is because embracing computers in all cases has meant renouncing humanity. As Weizenbaum writes, "there are some human functions that should not be replaced by computers. This has nothing to do with what computers can or cannot do. Respect, understanding, and love are not technical issues."
A keen and strong sense of responsibility drove Weizenbaum to write provocative pieces about computers and those enchanted by these machines. Joseph Weizenbaum was a computer scientist and a teacher of computer scientists, and he hoped that the message of "Computer Power and Human Reason" would resonate with his peers and colleagues. In Weizenbaum's view, "scientists and technologists bear a particularly heavy responsibility due to their abilities, a responsibility that cannot be cloaked in slogans such as technological necessity." Therefore, Weizenbaum believes it is crucial for scientists and technologists to reflect on the consequences of their actions, to rediscover their "inner voice," and, most importantly, to "learn to say 'no'!"
The Need for Responsibility
Hovering over Weizenbaum's social criticism is the memory of his childhood escape from the Nazis. In his work, figures associated with fascism appear not as terrifying monsters or inexplicable monoliths of evil, but as real individuals who abandoned their responsibility to the rest of humanity. Adolf Hitler and the Nazi leadership receive little attention in Weizenbaum's writing; his focus is on the figure of the "good Germans" — those who feigned ignorance of the horrific events occurring around them. The excuse that the Nazi regime's careful organization kept "good Germans" in ignorance does not convince Weizenbaum; instead, he maintains that "the real reason good Germans did not know is that they never felt responsible to inquire about what happened to their Jewish neighbors, whose apartments suddenly became empty." After all, when the Weizenbaum family fled Berlin, their own apartment suddenly stood empty too — and what of the residents who disappeared for far more terrible reasons? When Weizenbaum returned to Germany for the first time in the 1950s, he found himself looking at the Germans who had lived through the Nazi era, "wondering, sometimes more, sometimes less, what did you do? Who are you? Did you remain silent? Or did you resist? Or did you enthusiastically participate?"
Beyond the "good Germans," Weizenbaum's professional ties lead him to reflect particularly on the scientists who had worked for the Nazi government. In Weizenbaum's estimation, many German scientists under the Nazi regime donned the halo of the "good German," adopting the stance of "we are scientists; politics has nothing to do with us; the Führer decides." It was a position many German scientists could conveniently hide behind after the war, as figures like Wernher von Braun went on to play significant roles in the Cold War, their past associations ideologically excused. The Nazism that swept across the continent and ended millions of lives became something for which, it seemed, no one was accountable — a dark chapter held up as a warning but never read. The paradox Weizenbaum observed in post-war Germany is that suddenly no one seemed responsible for what had happened — citizens claimed ignorance, while scientists behaved as if how their inventions were used was no concern of theirs.
The phrase "never again" is often quoted regarding the brutal rule of the Nazis, but in Weizenbaum's work, this "never again" takes on another nature, as if to say: "we scientists can no longer shirk responsibility." As Weizenbaum himself stated, "taking responsibility is a moral issue. Most importantly, it requires recognizing and accepting one's limitations and the limitations of one's tools. Unfortunately, the temptation to do the exact opposite is very great." From the computers critical to launching the Vietnam War to advancements in developing more destructive atomic weapons, for Weizenbaum, the work of scientists and technologists is crucial in supporting and enabling violence in the world. The establishment of these means of ending life relies on the tacit approval of science, and for this, Weizenbaum warns his colleagues to remember: "Without us, it cannot continue! Without us, the arms race, especially the atomic race, cannot continue."
Certainly, the scientific community finds it difficult to extricate itself from the military, since much of the research scientists conduct relies heavily on generous funding from various branches of the Department of Defense. Weizenbaum, as a professor at MIT, is acutely aware of the effect this has on his work and that of his colleagues: "at MIT, we invented weapons and weapon systems for the Vietnam War... MIT is closely tied to the Pentagon." The challenge extends far beyond the university where Weizenbaum taught, and he states candidly that scientists must understand the purposes to which their projects will be put: "Today, we can almost certainly know that every scientific and technological achievement will be used for military systems whenever possible." Even seemingly harmless programs may ultimately serve violent ends — and for Weizenbaum, scientists must recognize this and confront its implications. Computers and science can serve humanitarian purposes, but that does not dispel concerns about their anti-human effects.
Weizenbaum is committed to never allowing himself to become like the German scientists who disclaimed responsibility for the impacts of their work — even as he observes that many of his colleagues are reluctant to heed his calls for strict accountability. Thus Weizenbaum becomes a vocal advocate of scientists' obligations and an opponent of militarization, refusing to sit quietly in his office in the face of injustice — and in the process he realizes he has become a kind of "fig leaf" for MIT, though he remains committed to standing publicly on moral principle. Weizenbaum is willing to call out the "compulsive programmers," the "technocratic intellectuals," and the "artificial intelligentsia," and he is willing to challenge his scientific peers. While other prominent social critics deeply concerned about technology, such as Lewis Mumford and Hans Jonas, have written about the need for scientists to take responsibility, that criticism takes on particular force when articulated by someone like Weizenbaum from within the scientific community. When Weizenbaum urges scientists to take responsibility, he offers himself as an example of what that might look like.
It is this commitment that leads Weizenbaum into debates about science and technology across multiple fields, not limited to discussions of computers and weapon systems but extending to challenges to the "artificial intelligentsia." Weizenbaum reacts strongly to the tendency within the artificial intelligence community to proclaim that humans are machines — the central claim being that the whole person can be understood solely from a scientific perspective. For Weizenbaum, this notion not only reveals a dangerous arrogance but also represents a "profound contempt for life." Marvin Minsky, also a professor at MIT, uttered the statement that particularly enraged Weizenbaum: "the brain is merely a meat machine," a phrase Weizenbaum returns to repeatedly in his writings. The "meat machine" serves as a pithy encapsulation of the artificial intelligence community's views and a distillation of the technological metaphor. The idea that something as complex as human thought can be reduced to easily quantifiable units of information strikes Weizenbaum as absurd, even as he ruefully recognizes that this perspective is prevalent in "the artificial intelligentsia, the AI field, and many important sectors of scientists, engineers, and ordinary people." Intelligence, whether human or artificial, is difficult to define, and thus even harder to quantify — yet pronouncements about "artificial intelligence" often give the impression of a settled consensus on what intelligence means. Through ELIZA, Weizenbaum observed firsthand the misunderstandings that could arise about computer intelligence, and having witnessed the views of prominent AI scientists such as Minsky, he recognized that a technologically optimistic, if somewhat cynical, "spirit of artificial intelligence permeates many currents in the field of computer practice."
Weizenbaum sees himself not as a critic of computers and technology but as a critic of society; yet his social criticism focuses chiefly on the impact of computers and technology on society. Thus Weizenbaum stands among those thinkers who "have expressed serious concern over the unrestrained development of science and technology for years." Although in his youth Weizenbaum enjoyed the thrill of early computers and played an important role in their development, his experiences working alongside the "technocratic intellectuals" and observing technology's effects on society tempered his initial enthusiasm. At a conference held in Cambridge, Massachusetts, in 1979, Weizenbaum remarked pointedly, "I believe our culture has a weak value system, one that rarely appeals to the common good, and is thus catastrophically vulnerable in the face of technology." In such a context, a value system that overestimates science and technology easily prevails, especially when it offers a comforting explanation for the powerlessness so many people experience. The gap between technology's potential and its realization is a persistent contradiction — "on the one hand, computers make it possible to live in a world of plenty for everyone, while on the other hand, we are using them to create a world of pain and chaos." In his work, Weizenbaum repeatedly describes the transaction with technology as a "Faustian bargain" — which still leaves the question of who is to be held responsible for signing the contract with Mephistopheles.
At the Cambridge conference, Weizenbaum paid particular attention to how the words "we" and "us" are used in discussions of technology and science. A machine can emerge from the decisions of a few individuals in a laboratory and have massive global consequences — yet its costs are framed in terms of what "it will do for us." All this talk of "us" carries a strange attribution of collective guilt. Weizenbaum had long criticized the "good Germans" who pleaded ignorance, but his criticism rested on the conviction that their claimed ignorance was an illusion, a deliberate self-deception. When it comes to the technological transformations under way inside corporations and universities, however, the public's ignorance of what is happening is not feigned. As a child, Weizenbaum could see the overt barbarism of the early Nazi regime in the streets, but the dangers of technology are not put up for open debate or a vote. When Albert Hirschman responded to Weizenbaum's questioning of this "we" at the conference by remarking, "Every nation has the technology it deserves," Weizenbaum countered, "I do not believe that nations deserve things any more than people do, especially things imposed on them by others."
If Weizenbaum counsels caution in identifying "we" and "us," he is even clearer about those he considers "the others" — namely, the members of the "technocratic intelligentsia." As the scientists behind the machines, wielding the technological metaphor, these individuals make decisions that are ultimately "imposed" on the broader public. What Weizenbaum demands is that these "others" accept responsibility — "scientists and technologists can no longer evade responsibility for what they have done by appealing to the infinite power of society to change themselves to adapt to new realities and heal the wounds they have inflicted on society. Certain limits have been reached. The changes that new technologies may require may not be realizable, and the failure to realize these changes may mean the destruction of all life. No one has the right to impose such choices on humanity." Science and technology claim to be a panacea, changing the world in ways that render other remedies ineffective. Thus, Weizenbaum attempts to awaken the ethical imagination of his peers.
Weizenbaum seeks to encourage his colleagues to think from the perspective of utilizing technology and science to ensure "everyone has access to all the material wealth needed for a dignified life," and for those who view this goal as unrealistic, he disdainfully replies, "The impossible goals I mention here are as possible as our potential to destroy humanity."
When Joseph Weizenbaum passed away in 2008 at the age of 85, he had secured a place in the history of computer science, being both an important scientist and a major critic of the role of computers in society. Throughout his life, Weizenbaum stood at the forefront of significant transformations in computing, from mechanical behemoths that required entire rooms to personal computers and the early incarnations of smartphones. As he witnessed computers becoming smaller, more powerful, and increasingly connected to activities in daily life, his critiques remained steadfast. For him, the military origins of computers could not simply be forgotten as an inconvenient historical detail, and although he did not deny the impressive potential of computers, he remained aware that this potential often went awry.
Weizenbaum was not a prolific writer, but his articles and his book "Computer Power and Human Reason" continue to have a significant impact on scholars writing about the influence of computers on society. In "The Closed World: Computers and the Politics of Discourse in Cold War America," Paul N. Edwards argues that "tools and metaphors are linked through discourse," drawing on Weizenbaum's treatment of technological metaphors to discuss "how tools and their use shape a component of human discourse, and through discourse, not only directly shape material reality but also shape the psychological models, concepts, and theories that guide this shaping." Weizenbaum's rather harsh portrait of "compulsive programmers" has proven to be more than a stereotype that can be easily dismissed. Sherry Turkle credits Weizenbaum with helping expose the figure of the "hacker," noting that "many first became aware of hackers with the publication of Joseph Weizenbaum's 'Computer Power and Human Reason' in 1976." Weizenbaum's descriptions of hackers and compulsive programmers have resonated over the years, even as such individuals have migrated from the dark corners of university computer labs to the boardrooms of large companies. Wendy Hui Kyong Chun writes, "Weizenbaum argues that programming creates a new form of mental disorder: compulsive programming"; although she counters Weizenbaum's accusation that compulsive programmers find no joy in their pursuits, she does identify some individuals, particularly Richard Stallman, "who [fits] Weizenbaum's description of hackers." Clearly, Weizenbaum's encounters with compulsive programmers allowed him to identify something real, just as his work on those who embrace the technological metaphor enabled him to anticipate its effect on discourse.
Certain aspects of Weizenbaum's thought have been questioned over the years, and some of his predictions have proven flatly wrong, such as his answer in 1978 to the question of whether home computers would become as ubiquitous as televisions: "The answer is almost certainly no." In a world of smartphones, tablets, laptops, and televisions that are themselves internet-connected computers, that "no" has proven incorrect. However, even if some of Weizenbaum's comments about computers have become outdated, his arguments have lost none of their moral weight. In "Computer Power and Human Reason," Weizenbaum presents a list of critics who opposed "the unrestrained development of science and technology," and later writers have placed Weizenbaum himself in that company. David Golumbia critiques the rise of "computationalism" in "The Cultural Logic of Computation," defining it as "the commitment to the view that a large part or even the totality of human and social experience can be explained through computational processes," and noting that "Weizenbaum openly opposed the view of computationalism and went on to write a compelling book about its problems." When Golumbia groups together the "notable scholars" who critique "computationalism" — much like those who challenged "the unrestrained development of science and technology" — he places Weizenbaum alongside many who also appear on Weizenbaum's own list.
Thus, this book provides an important overview of the life and thought of Joseph Weizenbaum. In this wide-ranging conversation, Weizenbaum candidly shares with Gunna Wendt the issues he grappled with throughout his life — from ELIZA to AI to the technological metaphor — and shows how his personal experiences shaped the positions he took. Most importantly, the book demonstrates Weizenbaum's enduring vitality as a social critic of technology's role in contemporary society. While other prominent critics, such as Mumford, passed away before they could write anything about the internet, the interviews presented here make clear that Weizenbaum was unwilling to be taken in by the utopian veneer some attempted to drape over it. In the face of technological advancement, Weizenbaum did not lose his critical spirit. As he wryly put it: "The internet is a big garbage dump — certainly there are some pearls in it, but you have to find them first." This book portrays Weizenbaum as self-aware and self-deprecating, yet still firmly committed to his moral principles — not a simple overview of Joseph Weizenbaum the computer scientist, but a celebration of Joseph Weizenbaum the complex individual, one who acknowledged the complexity of other human beings.
Those seeking a simple celebration of technology will not find one in Joseph Weizenbaum's work. As a social critic, he was — and in this book proves he remained — committed to voicing uncomfortable opinions. "Computers, like televisions, are embedded in our crazy society. Everything is embedded in this society, and this society is clearly crazy." This observation does not collapse into nihilistic cynicism or fatalistic despair, however; it holds instead that people can overcome their powerlessness and reclaim responsibility for their lives. For Weizenbaum, people must become "islands of reason" — even if doing so isolates and alienates them, for such islands may still attract like-minded individuals willing to acknowledge that "perhaps we are now addicted to modern science and technology and need to undergo detox." Importantly, this is not a call to retreat from the world but a call for deeper engagement with today's moral dilemmas. People must become aware of the madness of the world around them, and when that awareness arises, "we should speak out, we should share with others what we have come to realize."
And this is precisely the goal Joseph Weizenbaum sought to achieve. This book is a map to discover the island of reason — the island of reason that is Joseph Weizenbaum.