API stands for Application Programming Interface. In the context of Media Futures, an API routes the output from one's own unique Attention-processing Algorithm into an Alchemical reaction triggered by the convergence of other human-driven APIs.
I wrote my first post about APIs in the spring of 2005, at a moment when APIs such as those of Flickr and del.icio.us were just becoming popular targets for developers. Since then, the subject of APIs has become commonplace in any discussion of the future of media. In fact AOL, that stalwart of old new media, is now obsessed with open APIs. Tina calls it “the liberation of egosystems.” Open data transport has suddenly become de rigueur among even the most traditional media companies. In less than two weeks, legions of their corporate development executives will descend upon SF to walk down the red carpet of the O’Reilly ceremony, ready to sign the top Web 2.0 talent to long-term studio deals.
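Part of what made those early web APIs so attractive to developers was their simplicity: an HTTP request with a handful of query parameters. As a rough sketch (the method name follows Flickr's published REST conventions, but the API key here is a placeholder, and the request is only assembled, not sent), a photo search might look like this:

```python
# Minimal sketch of a mid-2000s-style REST API request, using Flickr's
# photo-search method as the example. "PLACEHOLDER_KEY" stands in for a
# real key from Flickr's developer program.
from urllib.parse import urlencode

def flickr_search_url(api_key, tags):
    """Build the REST URL for a flickr.photos.search call."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "tags": tags,
        "format": "rest",
    }
    return "https://api.flickr.com/services/rest/?" + urlencode(params)

url = flickr_search_url("PLACEHOLDER_KEY", "sunset")
print(url)
```

Nothing more than a URL: that low barrier to entry is what turned these services into "popular targets of developers."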
But while we all fall over ourselves to proclaim our “openness,” we introduce a far heavier burden of trust into the mix. Is one company’s “open” the same as another’s? While I may be able to avoid data lock-in in one silo, how do I know for sure the next “open data” silo will be equally amenable to the mobility of my data? These questions invite a deeper investigation into the history of APIs and their evolution, both physical and electronic.
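To make the worry concrete: "data mobility" demands not just an export from silo A but a translation into whatever shape silo B is willing to import. A minimal sketch, with both record formats invented for illustration:

```python
# Why "open" alone is not enough: moving bookmarks between two open-API
# silos still requires that both ends agree on export and import formats.
# Both services and both record shapes here are hypothetical.
import json

# What service A's export endpoint might return:
exported = json.dumps([
    {"url": "http://example.com", "tags": ["media", "api"]},
])

def to_service_b(export_json):
    """Translate service A's export into service B's import shape."""
    records = json.loads(export_json)
    return [{"href": r["url"], "labels": ",".join(r["tags"])} for r in records]

print(to_service_b(exported))
```

If service B changes, withholds, or never documents its import format, my "open" data is exportable yet effectively immobile — which is exactly the trust problem.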
In a memorandum dated July 15, 1949, Warren Weaver, who held the position of director of the division of natural sciences at the Rockefeller Foundation from 1932 to 1955, wrote about the possibility of language translation by an electronic computer. It was the first suggestion most had seen that such a thing might be possible, and as he draws the memorandum to a close, his words preview the emergence of the API:
Think, by analogy, of individuals living in a series of tall closed towers, all erected over a common foundation. When they try to communicate with one another, they shout back and forth, each from his own closed tower. It is difficult to make the sound penetrate even the nearest towers, and communication proceeds very poorly indeed. But, when an individual goes down his tower, he finds himself in a great open basement, common to all the towers. Here he establishes easy and useful communication with the persons who have also descended from their towers.
Thus may it be true that the way to translate from Chinese to Arabic, or from Russian to Portuguese, is not to attempt the direct route, shouting from tower to tower. Perhaps the way is to descend, from each language, down to the common base of human communication – the real but as yet undiscovered universal language – and then re-emerge by whatever particular route is convenient. Such a program involves a presumably tremendous amount of work in the logical structure of languages before one would be ready for any mechanization.
The key to examining the evolution of the role of the API in the context of Media Futures lies, in fact, in the multiple resonances of its last term, interface: a surface lying between two portions of matter or space, forming their common boundary; a means or location of interaction between two systems or organizations; an apparatus designed to connect two scientific instruments so that they can be operated jointly. The abstract concept of an interface thus contains within it the possibility of a very literal connection between two beings, two faces. As a physical interface connects two pieces of hardware, a user interface connects a human and a computer, and a software interface connects separate software components so that they may communicate with one another. To interface is to come into interaction with a thing or being, to communicate, in manners both figurative and literal.
As a platform that allows a computer system, library or application to open itself to use by other computer programs, or to allow for the exchange of data between them, the APIs of yesterday were IBM mainframes and Microsoft SDKs, arcane languages of translation between hardware and software.
From the late 1950s through the 1970s, a number of American, German and British manufacturers (Burroughs, Control Data Corporation, General Electric, Honeywell, NCR, RCA and UNIVAC; Siemens and Telefunken; and ICL, respectively), produced such mainframes, computers used in large part by companies and government institutions for the purposes of bulk data processing in the context of, for example, the census or financial transaction processing. IBM secured itself a position of power in the industry with the development of its 700/7000 series, based on vacuum tubes and transistors, and with its 360 series mainframe. Unveiled in 1964, the 360 series was to be an all-around computer system, a series of compatible models for purposes both scientific and commercial – a series which, moreover, brought together features which were once only available in scientific or commercial computers, such as floating point arithmetic in the former and decimal arithmetic and byte addressing in the latter. The 360 series also included supervisor and application mode programs and instructions and built-in memory protection facilities, making it one of the first computers manufactured with provisions specific to the use of an operating system.
Console of an IBM 360/67 mainframe
As demand for the older mainframe systems fell off, new installations were confined mainly to finance and government, and personal computer networks came to challenge the mainframe. It was during the rise of personal computing, though, that the APIs with which we are most familiar came into being and, in the case of Windows, achieved dominance.
In 1975, the Altair 8800 was introduced in Popular Electronics: an affordable personal computer that was, some argue, the spark that set Apple Computer and Microsoft ablaze in their development of personal computers. The Apple II, though less capable and versatile than some of the larger computers of the day, gave computer enthusiasts an environment in which to develop their own programming skills and to run simple office and productivity applications.
The IBM PC, released in 1981, took the personal computer into the realm of business, giving individual users word processing, spreadsheet and database programs that would change the way businesses stored, sorted and used their data. Four years later, in 1985, in order to compete with the graphical user interfaces popularized by Apple, Microsoft released an add-on to MS-DOS: an operating environment known as Windows.
Though that release of Windows was not an operating system in the full sense of the term, it pushed beyond the characteristics of a typical desktop environment, adopting some functions of an operating system. Windows got a leg up on competing systems in large part because MS-DOS dominated the early landscape of personal computing. But the engine of Windows’s dominance (up until Google, that is) is the API. The APIs that enabled professional programmers to develop desktop applications on top of platforms (most notably the Microsoft Windows API) have now given way to APIs that feed off of the platform of the Internet. And while Microsoft and the desktop are controlled by physical bodies, the Internet, despite the fact that certain companies oversee enormous pools of user data and can direct traffic as they see fit, is not governed by any particular body or set of bodies. If the power flow of yesterday’s APIs was vertical, headed at the top by the executives of companies like Microsoft, who allowed programmers to build on their platform applications to be used by the users at the bottom, we might see the power flow of today’s APIs as closer to horizontal.
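The contrast can be sketched in a few lines of Python (the web host below is an invented example, and the request is built but deliberately not sent): a platform API call reaches down into the operating system the program runs on, while a web API call addresses any peer on the network that publishes an HTTP interface.

```python
import os
from urllib.request import Request

# Vertical: the program asks the platform beneath it (the OS) for a fact
# only the local machine knows.
cwd = os.getcwd()

# Horizontal: the program addresses a peer service over HTTP.
# (api.example.com is an illustrative host, not a real endpoint.)
req = Request("http://api.example.com/v1/photos?tags=sunset")

print(cwd)           # a local filesystem path
print(req.full_url)  # the address of a remote, network-facing API
```

The first call is bounded by a single vendor's platform; the second can be answered by anyone, anywhere, who chooses to publish an endpoint — which is the horizontal flow the paragraph above describes.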
Next: The Thrilling Poverty of Physical Gestures