Information in formation

increasingly unclear
Mar 22, 2020

This is the first of a series of articles about information. I am collecting all my notes from years of teaching into these informal articles, and updating them as new information comes in and as my views change.

I have a love-hate relationship with technology. I make no secret of my love for technology. I got into computers fairly early in life and find real pleasure in programming, in computational making and thinking, which I use and teach as an alternative to “design thinking” (a rant about the latter may come in a future article; in the meantime, see here).

At the same time, I think it’s equally important to take a critical perspective toward technology. I don’t like screens and what they do to our selves and our social interactions. And technology is inherently tied to corporate interests, which seldom align with those of people or with democratic ideals.

This two-headed approach has caused confusion among students and academic colleagues. The Information Experience Design programme I ran at the Royal College of Art until recently approached this dilemma directly, by contrasting information with experience and by questioning Design as a discipline. Overall I believe it was successful: it produced artists, designers, educators and researchers; it engaged with technologies critically by using them; and it balanced that with an equal emphasis on analog tools and materials, and a range of approaches from science and art. This series of articles comes out of my notes and seminars from my time running that programme, from 2012 to 2019. (Yes, I may do a series on experience after this one on information, but I’ve already written an in-depth article on experience here.)

So my aim is to bash these two perspectives—computational versus critical—together and see what happens. Maybe they will find harmony in some new theory of information, but maybe they will just remain opposite poles which can co-exist. The world is full of contradictions and paradoxes, and holding multiple perspectives is a necessary condition today.

A computational view

Let’s begin at the beginning:

The big bang at the beginning of time consisted of huge numbers of elementary particles, colliding at temperatures of billions of degrees. Each of these particles carried with it bits of information, and every time two particles bounced off each other, those bits were transformed and processed. The big bang was a bit bang. Starting from its very earliest moments, every piece of the universe was processing information. The universe computes. [1]

This is our starting point, courtesy of quantum physicist Seth Lloyd. Everything computes, and information is everywhere, from the tiniest particles to the large-scale structure of the universe.

We didn’t always see it that way. ‘Computers’ were humans (most often women) before Charles Babbage came along. Along with his collaborator, Ada Lovelace, he was what we would call a hacker — a restless and relentless tinkerer, picking locks and cracking ciphers and taking apart toys. He studied just about everything, was good at mathematics, and spent much of the 19th century building calculating machines: first the Difference Engine, then the programmable Analytical Engine, often regarded as the first design for a computer. These were machines of brass and pewter, with lots of moving parts — their gears and rotors and cogs gave data physical form and, conversely, abstracted information out of its material state.

Half a century later, Claude Shannon and Alan Turing began to encode the world into numbers, digitising and decoding, making the calculating machines themselves into abstract entities — logical machines.

Another half-century on, by the 1990s, computers had shrunk from room-sized behemoths to the desktops and laptops sitting in front of us. They had come to pervade not just our work and home lives but our intellectual lives: cognitive scientists described the human brain as a computer, biologists regarded cells as computers of genetic information, physicists viewed the universe as a computer. In centuries past, clockwork and industrial machines had played the same metaphorical role.

In the 1990s, Mark Weiser looked ahead to when computers would shrink to almost nothingness, disappearing into things and walls — a coming age, he said, of ‘calm technology,’ and he imagined scenarios in which walls, windows and appliances would communicate with, help and support us. Researchers like Hiroshi Ishii at the MIT Media Lab began to explore new ways of interacting with computers, getting them out of their boxes, and replacing keyboards and mice with materials from the pre-digital age, like paper, wood and metal — harkening back almost to Babbage’s time but with hindsight about what the digital can do when it meets the physical. Computing as craft, information as another material to work with.

Then the artists began to get hold of them.

Actually, from the 1960s, groups like the Computer Arts Society in the UK and Experiments in Art and Technology (E.A.T.) in the US produced some wonderful work with early computers. (Here’s an example I’ve written about.) But that work demanded some programming wizardry, and the demand has continued into the 21st century: even as computers have shrunk down to ‘smart dust’ and ‘radical atoms’, getting them to do what we want still sometimes requires some pretty hardcore coding and engineering (or at least knowing some hardcore coders and engineers).

But in the 1990s the situation began to change. People like Dan O’Sullivan at NYU started to deconstruct the technology: strip it back to its electrical essence, question it and push against it, make it adapt to humans instead of the other way around. If you understood some basic concepts about circuits, switches, voltage and resistance, he explained, you could get things to do what you wanted by embedding electronics into them.

This was made easier by those ever-shrinking computers. Most of the time you don’t need a supercomputer to get interesting results. O’Sullivan taught his students at NYU to use small devices called microcontrollers — simple computers that did one or two things well. They didn’t need an operating system, consumed very little power, were robust enough to be installed in cars and refrigerators, and were cheap as chips (in fact they were and are chips, often no bigger than a fingernail).

One of O’Sullivan’s students, Tom Igoe, helped develop an even more approachable device, a small board built around a microcontroller and designed especially for hackers, hobbyists and artists: the Arduino. The stroke of brilliance was to make it open source — both the hardware and the software used to program it — effectively wresting control from both the manufacturers and the programming priesthood.
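To give a sense of just how little code this takes, here is a minimal sketch in the Arduino dialect of C++. The wiring and pin numbers are my own assumptions for illustration, not anything specified above: a pushbutton connected between pin 2 and ground, with the board’s built-in LED standing in for whatever object you want to animate.

```cpp
// Minimal Arduino sketch (assumed wiring: pushbutton between pin 2 and ground).
// While the button is held down, the on-board LED lights up.

const int BUTTON_PIN = 2;            // hypothetical input pin for the button
const int LED_PIN = LED_BUILTIN;     // on-board LED present on most boards

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP); // internal pull-up: the pin reads LOW when pressed
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  digitalWrite(LED_PIN, pressed ? HIGH : LOW);  // mirror the switch state on the LED
}
```

Swap the button for a sensor and the LED for a motor, a light or a speaker, and you have the basic pattern of physical computing: sense the world, decide, act back on the world.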

All of this began to open up new ways of conceptualising computation, now that almost anyone could embed it in the real world. O’Sullivan and Igoe’s 2004 book Physical Computing begins as follows:

We believe that the computer revolution has left most of you behind…. We need computers that respond to the rest of your body and the rest of your world.

Around the time that book came out, across the Atlantic, Walter Van de Velde was approaching computational thinking from the other direction:

We explore the idea that the world that we live in can be viewed as a kind of computer that continually computes its future…. We want to apply the idea of computation to the inhabited world of the natural, biological, cultural, technical social system that we live in. [2]

This is computer science without computers. It’s the universe-as-computer, but at human scale. In fact, Seth Lloyd was writing at the same time; by the 21st century, computation had seeped into everything. O’Sullivan and Igoe detail how to deconstruct computers and embed them in the world, and Van de Velde shows how to program the world itself, by influencing human attention and behaviour. (More on Van de Velde’s approach here.)

Such programs for the world-as-computer need not be complex. Stephen Wolfram, through years of designing software, came to believe that simple programs can create great complexity, and proposed that something like such programs might underlie many natural and social systems (the short sketch below gives the idea). And now, thanks to scientists like Lloyd, information is coming to be seen in physics as more fundamental than matter.
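The canonical example of such a simple program is the elementary cellular automaton Wolfram studied: a row of cells, each on or off, updated by a rule that looks only at each cell and its two neighbours. Here is a minimal illustration in plain C++; the grid width, number of steps and the choice of Rule 30 are my own choices for display, not taken from Wolfram’s own code.

```cpp
// Elementary cellular automaton, Rule 30: an eight-bit rule table applied to a
// row of cells produces a surprisingly intricate pattern from a single live cell.
#include <bitset>
#include <iostream>
#include <string>

int main() {
    const int width = 79;                // arbitrary display width
    const int steps = 40;                // number of generations to print
    const std::bitset<8> rule(30);       // the whole "program": 8 bits
    std::string cells(width, ' ');
    cells[width / 2] = '#';              // start from a single live cell

    for (int t = 0; t < steps; ++t) {
        std::cout << cells << '\n';
        std::string next(width, ' ');
        for (int i = 0; i < width; ++i) {
            // Each new cell depends only on its left, centre and right neighbours.
            const int l = cells[(i + width - 1) % width] == '#';
            const int c = cells[i] == '#';
            const int r = cells[(i + 1) % width] == '#';
            next[i] = rule[(l << 2) | (c << 1) | r] ? '#' : ' ';
        }
        cells = next;
    }
    return 0;
}
```

Part of the resulting pattern settles into regular stripes and part looks random; no general shortcut is known for predicting it other than running the computation itself, which is roughly Wolfram’s point about complexity arising from simple rules.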

Specifically, how this works, according to Lloyd, is as follows: “Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information.” Even something seemingly static, like a rock, a coin or a house, registers information simply by existing; and because everything exists in time, it evolves (however slowly), and as it transforms, it processes that information. Like Van de Velde, then, Lloyd is interested in just how the world processes information.

In biology, too, information (in the form of DNA) is seen as the very basis for life, and life itself is thought to arise from algorithmic processes: when a collection of cells reaches a certain level of organisation, a transition is believed to occur, from bottom-up self-organisation to top-down control, and the system becomes more than the sum of its parts. More is different: as Marx summed up long ago, quantitative differences become qualitative ones.

“Within its space,” writes philosopher Henri Lefebvre, “the living being receives information. Originally, before the advent of the abstraction devised by human societies, information was no more distinct from material reality than the content of space was from its form: the cell receives information in material form….”

Indeed, all the cells in our bodies receive, process and exchange information. Leah Kelly, unusually for a neuroscientist, became interested in the human gut — which contains more non-human cells than human ones — and its effects on cognition, from mood to autism. “From our conception,” she says, “every moment is a delicate (and possibly violent) exchange of information with the other, a necessary environment that forms and informs the self.” [3]

The apparent convergence of the sciences around information is aptly summed up by Michael Brooks: “It is becoming clear that many of our most complex challenges are, in fact, just one challenge: understanding information processing, whether it is carried out by the human brain, the universe, DNA or a silicon-based computer.”

But if everything, from elementary particles to microbes to people to galaxies, comes down to information, does that mean we can go in the other direction? Can we get “it from bit,” as the late physicist John Archibald Wheeler put it?

Another physicist, P. W. Anderson, didn’t think so. Writing in 1972, he decried the efforts of all types of scientists to reduce everything to a few fundamental laws and then apply those laws to phenomena at every scale. This, he said, doesn’t work, because of scale and complexity: as in the algorithmic account of life above, new properties appear with each new level of complexity. The title of his challenge to his fellow scientists was, in fact, “More Is Different”.

But Anderson, in railing against the scientific establishment, recognised that, while complexity inevitably increases, we need at least to start with reductionism. From there we can proceed to what Bruno Latour calls “unreducing” (irréduire): unpacking and deconstructing.

Go to the next article in this series.

Notes

  1. Lloyd, S. (2010) “The computational universe.” In Davies, P. and Gregersen, N. (Eds.), Information and the Nature of Reality. Cambridge University Press.
  2. Van de Velde, W. (2003) “The World as Computer.” Proceedings of the Smart Objects Conference, Grenoble.
  3. Kelly, L. (2016) “Self of Sense.” In Jones, Uchill and Mather (Eds.), Experience: Culture, Cognition and the Common Sense. MIT Press.
