Cyborgs R Us

Like a few million others, I got a new iPhone for Christmas. I think it has 56 gigabytes of memory. I’m eagerly anticipating the release of the Apple Watch in a few months. I’ve promised my daughter I’ll buy her one and doubtless I’ll get one for myself. Then I’ll be a full-on cyborg, with my heart rate, my step and activity patterns, my sleep (or not-sleep is more like it) record, and I can’t imagine quite what all else (but I want it!) recorded and readable to me (and maybe to millions of others, though I can’t comprehend why they would be interested) on my iPhone, my iPad, my MacBook Pro with its Retina display (which I’ve never quite understood, but know it is “good,” no “better than good … as in great”), and my Mac (sitting there on my desk all lonely because it can’t get up and go). I’ve had a FitBit exercise monitor for years and have dutifully entered on my FitBit webpage every morsel I’ve put in my mouth; never mind that I’ve gained weight despite never (in almost 3 years) having had a day where my caloric intake was more than my burn. Crap! I step naked on my FitBit scale every morning and the results (not just weight, but also percent body fat … however it knows that) are automatically sent to all my devices. Every morning I get a cup of coffee and sit down to check my stats … and then fire up my financial tracking program to monitor my “total net wealth” (thankfully it is above zero). This is 2014 and I’ll soon be 72.

In 1964 (I’m pretty sure) at age 21 I took a job with the Coleman Company in Wichita, Kansas. I had an undergraduate degree in mathematics, which I did because my mom told me to (her reasoning was that it meant a good job … hmmm). I finished the BS in 3 years and entered the business school to get an MS in business (not no stinking MBA). With the math background I believed I could ride the coming wave of technology into the business world. And I did … at least for the several years I stayed the course. My job at the Coleman Company included an active role in developing business applications for the first computer purchased by the company.

Entering the office building in Wichita on St. Francis Street, you found the new computing department through the door immediately to the left. The outer offices contained a few desks for secretarial staff, with a couple of glass cubicles in the back for a couple of managers. Then in a separate room was the computer. It was an IBM 1401 that stood a good four feet tall, was maybe two feet thick and four and a half or five feet long, and it boasted 4k memory (I’m not kidding). This machine was introduced by IBM in 1959, and by the mid-sixties it was at the height of its popularity, with some 10,000 installations amounting to more than half of all computer installations worldwide.

Coleman sent me to Kansas City and other places to take courses offered by IBM to learn programming and to understand how to use the computer for business applications. At the time I was still adept with the slide rule I’d used for the physics classes in my minor. In Coleman’s accounting department, comptometers were the mechanical computing device of choice. I well remember the older gentlemen who could operate these big machines, contorting their fingers to fit the banks of keys and counting off the punches of select keys, shifting from position to position to effect a multiplication. The spinning wheels at the top showed the results of the calculation, which they would then enter on huge paper spreadsheets. There was much potential to improve operations for the company with the amazing power of punched cards and the new IBM 1401.

Though I had considerable potential at Coleman, I stayed there only three years. During that time we developed many business computer applications for the IBM 1401, and we expanded its storage to include disk drives. These were huge multilayered disk platters that fit in a squat machine that looked a bit like a squared-off R2D2. I still remember learning how the information was stored and retrieved on the magnetic surfaces of these disks. The disk packs could be removed and stored in cabinets with their own plastic covers. We also had a huge printer that printed on continuous-feed paper; the paper was familiarly called “computer paper.” Before I left Coleman, the IBM 1401 was replaced by a much more powerful IBM System/360, which had a core memory capacity of 8 to 64K and an entirely different 32-bit architecture (the IBM 1401 was a 6-bit character machine). I worked so much on this machine that I could actually read streams of machine code, and I understood the physical architecture that comprised the memory of these machines. And this fact is important.

I’ve been reading Walter Isaacson’s “The Innovators,” which tells the story of the origination and imagination of modern electronic computers. I find it remarkable that the idea for a programmable calculating machine arose with Ada Lovelace (the daughter of Lord Byron) and Charles Babbage before the mid-nineteenth century. But it would not be until the late 1930s that the first functional calculating machines were developed. And it was not until my early lifetime, around the end of World War II in 1945, that the first actual electronic binary computers were operational: single hulking machines that would make the IBM 1401 look sleek. It was the war and military needs (like calculating firing trajectories and the complexities of building an atomic bomb) that supported the development of these early computing machines, as well as the awareness that they needed to be programmable; the earlier machines were physically constructed to do but a single kind of calculation.

But within twenty years, by 1965, I was working on applications to run a sizable business with a 4k computer; in 1969 the successful Apollo mission to land on the moon relied on a computer running at roughly 1 MHz with the rough equivalent of 4k of memory.

In 1967 I decided I needed to consider what I was doing with my life and took a leave of absence from the Coleman Company. Intending only to be gone a few months, at most a year, I enrolled in the Divinity School at the University of Chicago. What an odd choice! But that is another story. Because I was such a weirdo, Chicago didn’t offer me any financial assistance, so I needed to rely on a job to get through school. I found a job as a systems analyst in the University of Chicago’s computer business applications department. This was not an academic unit but the department, connected with accounting, that supported all the business applications for the University of Chicago, including its massive hospital complex. I was right at home because the whole operation depended on IBM 1401 computers and a sea of punch card operators. I worked in that department to support myself throughout my studies there from 1967 to 1973. My experience at Coleman put me in demand, and even though I never worked full time I was put in charge of designing and implementing a payroll system for the entire university, including all the various kinds of employees of the hospital. I recall, fondly actually, working a number of all-nighters in a basement room in the main hospital building doing the final tests and debugging in preparation for the live start-up of this massive payroll system. Remarkably, it worked.

A couple of other memories need to be added. In the early years on the faculty at Arizona State University (it was in the mid-‘70s, I suppose), university faculty began to use personal word processing machines. The earliest versions were strange typewriter-like things with tiny screens. They eventually included a larger screen, but these word processors cost over $4,000 at the time! Quite a few faculty invested; not me. I scoffed at them, wanting desperately to leave all that computer life behind me. I recall writing my first several books on a typewriter, where “cut” and “paste” really meant using scissors and paste. My method was to type the book once; then cut and paste and mark up this ungainly mess with arrows and notes; then type it again and send it to the press. I remember goading my colleagues by asking them whether their fancy word processors had led them to publish any more than they would have without them. I laughed at their constant blather about the technical issues they were experiencing. I felt they seemed more interested in what today we’d call geek prattle than in their own research and writing. It wasn’t until the mid-‘80s that I got a word processing computer for academic writing. My kids had an Atari that was modestly programmable and had, I believe, 4k memory as well; what’s with this 4k thing? I recall at the time being stunned that this little tabletop game device was as powerful as that IBM 1401 I’d been on so many dates with. My computer had a 12-inch (maybe?) orange glowing pumpkin-looking monitor and 40k memory, and I well remember thinking that I could never use all that luxurious memory. I also resisted email when it became common in the mid-‘90s. I suppose it was well into the 2000s when I remember being on a weekend retreat with undergrad majors and grad students. While on a hike I heard them use the term “gigabyte.” I’d never heard it before and had to ask what the hell they were talking about. Hmmm.

Now with my sleek little 56-gig iPhone sitting on the arm of the chair beside me, it is essential to try to gain some perspective. Often when we say “gain perspective” we mean something like finding a way to see a thing so it won’t be so overwhelming. I think the opposite holds here. For decades we’ve known of Moore’s Law, which predicts the doubling every two years of the density of transistors in a given space of computer hardware, suggesting a doubling of speed and storage capacity, but also of demand. And while I suppose we sort of understand that doubling every two years soon mounts up to quite a heap, we may be underwhelmed by the small factor, the number 2. I’ve been trying to imagine a way to whelm and overwhelm, because I think this response is appropriate to “gain perspective,” especially as we look to the future.
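One way to whelm is simply to let that factor of two run. Here is a minimal sketch in Python, assuming a strict two-year doubling cadence and taking the 4k of 1965 as the starting point; both are illustrative choices drawn from the numbers in this essay, not a claim about any particular device.

```python
# A toy illustration of how "doubling every two years" mounts up.
# The starting point (4,000 bytes in 1965) and the strict two-year cadence
# are illustrative assumptions taken from the narrative above.
start_year, start_bytes = 1965, 4_000

for year in range(1965, 2016, 10):
    doublings = (year - start_year) // 2        # one doubling every two years
    size = start_bytes * 2 ** doublings
    print(f"{year}: ~{size:,} bytes ({doublings} doublings)")
```

Run under those assumptions, fifty years of doubling lands 2015 at roughly 134 billion bytes, the same ballpark as the iPhone in the table below; the factor of 2 is small, but it compounds mercilessly.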

So take just a few key numbers. Four thousand bytes is the storage capacity of the IBM 1401 I worked on in 1965, and roughly that of the Apollo moon mission computer in 1969. Next is the number forty thousand, the storage capacity of my first computer word processor in the mid-‘80s. Finally let me leap to my iPhone with its 56-gig memory, and to a little silver box I have, about as large as two iPhones, that is my terabyte external memory drive (and maybe my MacBook Pro has this much memory as well, who knows?). I recall the use of an analogy to help us imagine quantities of money, as in the national debt, but it works well here as a measure of the change in computing and information technologies in my lifetime. The analogy is based on the simple question of how long it would take to count to a given number at a rate of one per second. So I start out “One, two, three … and so on” at the rate of one per second; the task is to figure how long it will take to arrive at a given number. The tendency, I think, is to consider the increases among thousands and millions and billions and so on as something like a factor of two or so; but vaguish. It is interesting to me that often on news programs, after describing something on the scale of billions, the newsperson will say something like, “that is billion with a B,” as though million and billion might be easily confused, and I think they are. Million is seen as “a lot”; billion is seen as “a lot more.” Okay, so I’ll switch to a table for effect. The first column is the year of the benchmark number drawn from my personal narrative, which is in the second column; the third column is the approximate time it would take to count to that benchmark number.

Year     Benchmark         Time to count at one per second
1965     4,000 bytes       1.1 hours
1985     40,000 bytes      11.1 hours
2015     56 gigabytes      1,775 years
2015     1 terabyte        31,710 years

I know you are madly checking my math! I did as well, several times. You do so, you have to do so, because of a feeling of incredulous overwhelm. It is all the more remarkable in that the largest changes have occurred in the last decade, with the expansion of the Internet and the rise of social media and Google searches and YouTubing as our constant companions.
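For anyone who would rather let a machine do the checking, here is a minimal sketch in Python of the counting arithmetic behind the table, assuming a 365-day year; its figures differ from my hand-rounded ones only in the last digit.

```python
# Check the "counting at one per second" table: counting to N takes N seconds,
# so we simply convert N seconds into hours or years (365-day years).
SECONDS_PER_HOUR = 3_600
SECONDS_PER_YEAR = 365 * 24 * SECONDS_PER_HOUR

benchmarks = {
    "1965: 4,000 bytes": 4_000,
    "1985: 40,000 bytes": 40_000,
    "2015: 56 gigabytes": 56 * 10**9,
    "2015: 1 terabyte": 10**12,
}

for label, count in benchmarks.items():
    if count < SECONDS_PER_YEAR:
        print(f"{label}: {count / SECONDS_PER_HOUR:.1f} hours")
    else:
        print(f"{label}: {count / SECONDS_PER_YEAR:,.0f} years")
```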

I have been fascinated that the lifespan of electronic computing is almost precisely coincident with the span of my own life. Not only that, but I have been actively and persistently involved with the development of electronic computing over that half century.

Two things more to say. First is that when I was a systems analyst and programmer for the IBM 1401 I could understand the architecture of the machines in the finest detail. Schematically, the memory amounted to a gridwork of wires where, at each intersection of the grid, the wires were threaded through a doughnut-shaped magnetic core. Other wires ran along the diagonal of the grid and also threaded through the hole in each core. These grids were stacked one on top of another, with the aligned intersections forming bits, a byte being composed of the stack of associated bits. Each bit could be either “on” or “off,” representing the values “1” or “0,” and the combination of zeros and ones in the stack comprised the value of a “digit” or number. The more layers, the more bits per byte; with each added bit the range of values representable in a byte doubled. One set of wires was responsible for turning the bits on or off; another set was responsible for sensing whether a bit was on or off. That was the basic architecture … sort of anyway.
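For readers who never met core memory, here is a minimal sketch in modern Python of that stacked bit-plane idea; the grid size, the six-plane depth (echoing the 1401’s 6-bit characters), and the function names are illustrative assumptions, not the machine’s actual wiring or geometry.

```python
# Toy model of stacked core planes: the cores at the same (row, col) position
# in every plane together hold one stored value. Sizes and names are illustrative.
ROWS, COLS, PLANES = 4, 4, 6   # six planes, echoing a 6-bit character machine

# planes[p][r][c] holds 0 or 1: the on/off state of a single core
planes = [[[0] * COLS for _ in range(ROWS)] for _ in range(PLANES)]

def write(row, col, value):
    """Store a small integer by setting one core per plane at (row, col)."""
    for p in range(PLANES):
        planes[p][row][col] = (value >> p) & 1    # one set of wires flips cores on/off

def read(row, col):
    """Recover the integer by sensing the core at (row, col) in every plane."""
    return sum(planes[p][row][col] << p for p in range(PLANES))   # sense wires read back

write(2, 3, 45)        # store the value 45 at one grid position
print(read(2, 3))      # -> 45: the stacked bits reassemble the value
print(2 ** PLANES)     # -> 64: each added plane doubles the representable values
```

Each added plane is the “added bit” mentioned above: six planes give 64 possible values at every grid position.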

Second, and in contrast, today the architecture is so much more complex and condensed, and the processing speeds are so high, that it takes someone highly trained even to begin to grasp it. Beyond that, the algorithms in play for something like a search are so sophisticated, and the degree to which these algorithms are reshaped by the results of their own work is so massive, that it is doubtful anyone can understand computer operations in anything like the way I was able, with but little training, to understand the IBM 1401 in the mid-‘60s.

The issues raised for me are many and not so easy to articulate briefly. First is the shift in attitude that has occurred during this half century of the rise of electronic computing. For the behemoths of the late ‘40s that were programmed by physically plugging in wires, and for the 4k machines that in the mid-‘60s took humans to the moon and ran large companies, the architecture and processing were comprehensible and describable with little specialized technical knowledge. Yet these computers were peripheral to our personal lives through the ‘80s and even pretty much until the end of the 20th century. However, in the last twenty years computers have become not just personal but even intimate. They are our constant companions, never more than an arm’s length from us. Most of us are now gesturally conditioned to feel the need to connect with our computers frequently, nearly constantly. I understand these devices to be prosthetic, extending us into the world both as a realization of who we actually are and as an effort to elevate and transcend our experienced limitations. Our computing devices are and have always been a prosthetic extension of ourselves: of who we are and who we desire to become.

I know that some folks reflect on the relationship we have with our computing devices with a sense that the devices have come to be in control; that they are transforming us somehow; that our lives are diminished as a result. We all entertain this view at times. Yet it is my sense that a more interesting approach is to see these devices as prosthetics that extend us, that are like any tools that magnify the capacities of our own moving bodies. As bodies are extended prosthetically through gestures and tools, which are pretty much the processes and outcomes of human history, there are always positive and negative aspects. Spear points gesturally used as weapons may provide sustenance to self and family, but they also amplify the destructive power of conflict and war.

The nature of the connection we have with our computing devices seems to me also at the heart of any important discussion of artificial intelligence, a discourse long associated with Alan Turing but dating back to Ada Lovelace. Lovelace wrote of Babbage’s Analytical Engine that it “has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” Turing, however, supported the idea that computers can learn, even learn from the accumulation of their experience, and thus have “intelligence.” The debate has progressed through many arguments and counter-arguments (which I look forward to considering in greater detail at another time), but I want now to suggest that AI can be understood in terms of gesture and prosthesis. The first images drawn on the walls of caves occurred by gestures that entooled the body to extend itself prosthetically into the world. This prosthesis enabled the creation of external memory and a prosthetic reach beyond the individual’s presence or even lifetime. It is nothing very radical to bestow “intelligence” on these objects, because they are prosthetic extensions of us. As we have memory and the capacity to think and learn, so too do those extensions of our memories and intellects. Yet the memory and intellects that are projected prosthetically onto objects cannot be realized apart from their connection, albeit later or by others, with us. There is a looping connection between us as gesturing beings and the objects of prosthetic projection. From the perspective I am suggesting, the test for artificial intelligence is its capacity to come into being, to self-generate, and to function with something resembling what we call intelligence independent of this looping connection with gesturing human beings. Such a condition is, of course, one of the most common themes in modern science fiction: the computer that becomes independent of its makers and takes over command, but more importantly becomes independent, no longer dependent on the looping connection with humans for its intelligent existence. In this case even the term “intelligence” comes into new light, because were these devices truly independent, then what we comprehend as “intelligence” should no longer be allowed to pertain as a criterion, simply because the word applied to anything is a human gestural prosthetic act. True artificial intelligence would, it seems to me, have to be totally independent, and in its independence it wouldn’t likely appear to be “intelligent” but simply baffling. Intelligence is invariably a way we gesturally and prosthetically appropriate and relate to that which is beyond us; it makes what is not us into a projection or extension of us so that it might be comprehensible.

Finally, the future. The calculation and processing speeds of computers, as well as their capacity to store and manipulate information, have outstripped ordinary human capacities. We are truly enhanced by these devices. They have evolved from huge, clunky, unwieldy machines to wearables and implantables, allowing the body to move gesturally in simple and unencumbered ways. As these devices become increasingly inseparable from us, even inconspicuous, we become enhanced in the most natural-seeming ways. We are capable of moving about freely with constant, instant access to endless bodies of information and enormous capacities to calculate, search, organize, communicate, learn, and interface with everything about us. As we become cybernetic organisms, I suggest that we become more, rather than less, human. The invention of warm clothing (a prosthetic extension of the body, creating an artificial insulated and protective skin), which allowed humans to live in a much greater range of climates and conditions, did not lead us to consider clothing as making us less human. The invention of books to hold memories and information did not make us less human. Nor should the invention of electronic devices that simply expand our capacities to process and store information. These means enable us to realize our potential as they expand our imagination, leading to the further extension of our potential.

What is daunting, however, is to even begin to imagine the potential enabled by the exponentially accelerating rate of development made possible by these gestural prosthetic expansions. We are actively remaking ourselves, as is the mark of human beings, yet the rate and scope of change are truly amazing.

As my lifetime has coincided with the rise of the electronic digital age (though I think the digital age goes back as far as humans who pointed a digit to designate an object at a distance), I would hope that the next century will be one challenged to re-naturalize the prosthetic extensions. It seems this has already begun. The inevitable consequence of Moore’s Law is unbelievable miniaturization allowing the integration of electronic devices into the organic body. The next step will be to understand how to shift from the electronic to the organic. This too is the projected future of computers. Future computers will be made of organic material allowing them to be wholly re-naturalized with the organic human body. Rather than becoming increasingly artificial and mechanical and electronic, surely we will become increasingly natural, bodied, and organic yet enhanced in now unimaginable ways in continuity with the exponential progress we are now tracking. Though we will most certainly be cyborgs, most importantly we will be human organisms enhanced in ways seemingly most natural to human architecture. Once again we will be enhanced to move about freely and actively throughout the span of our lives enjoying both cybernetic and organic connectivity and experience. I want to be here for that.
