Alvy Ray Smith is out to change how you think about pixels

By Harry McCracken

I was only a few pages into Alvy Ray Smith’s new book, A Biography of the Pixel, when I realized that all of my preconceptions about what it might be were wrong.

A legend of computer graphics, Smith is one of the people who bootstrapped the entire field in the 1970s. Then, as cofounder of Pixar, he helped get it started on the trajectory that would take it to its greatest heights. But to describe what he’s written just as a history of computer graphics would be woefully inadequate. For instance, French mathematician Joseph Fourier (1768-1830) is as big a player in this book as anyone who actually lived to see the computer age. And relatively little space is devoted to moments Smith witnessed firsthand.

It turned out that even my understanding of the term pixel had little to do with Smith’s definition. A Biography of the Pixel isn’t about blocky little elements on a computer screen. Instead, one of its key points is that pixels are invisible. Rather than having a one-to-one relationship with those on-screen blocks, they’re the behind-the-scenes data points used to conjure up a picture—math made into magic.

When I got to speak with Smith, he was unfazed by my admission that I hadn’t even grasped what a pixel was. “Nobody does, Harry,” he said. “That’s why I wrote the book.”


Alvy Ray Smith’s book comes out on August 3 from MIT Press.

Like the pixels that power the imagery all around us, A Biography of the Pixel is a dazzling game of connect-the-dots. Smith isn’t just a technologist: He’s also an expert historical spelunker (and a distinguished genealogist who’s received that field’s highest honors). As he wends his way through the annals of visual storytelling and scientific progress, he uncovers forgotten figures, messes with conventional wisdom, and explains some deeply technical issues in an approachable manner. The book’s scope is expansive enough to include everyone from Napoleon to Thomas Edison to Walt Disney to Aleksandr Solzhenitsyn, along with icons of computer science like Alan Turing, Claude Shannon, and Ivan Sutherland. Yet it never feels like Smith is chasing wild geese. Everything fits together, in ways you might never have expected.

Smith’s own story, though only one slice of the book, is pretty momentous itself. As an electrical engineering student at New Mexico State University, he generated his first digital image in 1965. In the 1970s, he worked with Richard Shoup on an important program called SuperPaint at Xerox PARC. Then he moved on to the graphics lab at the New York Institute of Technology, where he and colleague Ed Catmull helped the school’s idiosyncratic founder, Alexander Schure, pursue his vision of utilizing computers in the production of film animation.


Alvy Ray Smith

[Photo: Richard Kerris]

It was at NYIT that Smith and Catmull came up with the concept of the alpha channel, which made parts of digital images transparent so that they could be composited on top of each other into a single picture. That breakthrough—which eventually won Smith one of his two Oscars—became so fundamental to modern imagery of all sorts that it’s startling to realize that someone had to invent it.
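
To see the idea concretely, here is a minimal sketch of alpha compositing in Python. It illustrates the general “over” operation that an alpha channel makes possible; it is not Smith and Catmull’s original formulation, which stored premultiplied color, and the function and values are purely illustrative.

```python
# A minimal sketch of what an alpha channel buys you: each pixel carries an
# extra opacity value, so a foreground image can be blended "over" a background.
# Illustrative only; not the original NYIT code.

def over(fg, bg):
    """Composite a foreground RGBA pixel over a background RGB pixel.

    fg: (r, g, b, a), every component in [0.0, 1.0]; a is opacity/coverage.
    bg: (r, g, b), components in [0.0, 1.0].
    Returns the blended (r, g, b) pixel.
    """
    r, g, b, a = fg
    return tuple(a * c + (1.0 - a) * back for c, back in zip((r, g, b), bg))

# A half-transparent red pixel composited over a solid blue background:
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0)))  # -> (0.5, 0.0, 0.5)
```

Apply that blend at every pixel position across two images and you have digital compositing.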

Though Schure’s dream of becoming the next Walt Disney went unrealized, Smith and Catmull found a new patron in George Lucas and continued their collaboration at Lucasfilm’s computer graphics group. In 1986, their efforts got spun out into an independent hardware and software startup bankrolled by Steve Jobs. That was Pixar—and though Smith found it impossible to work productively with Apple’s cofounder, he stuck around until the company won acclaim for its short cartoons and had brokered the Disney deal that resulted in Toy Story, the world’s first feature-length computer-animated film.

In the excerpts below from an appearance Smith made at the Boston Computer Society on March 28, 1990, he talks about the alpha channel, a Pixar-produced Life Savers commercial, and the special effects in the James Cameron movie The Abyss—which were produced using Pixar software—with BCS president Jonathan Rotenberg and other attendees. With Toy Story still five years in the future, Rotenberg asks if any more shorts like Luxo Jr., Tin Toy, and Knick Knack are in the works; Smith says he can’t promise that Pixar will ever again make anything “just for the heck of it.”

After leaving Pixar in 1991, Smith cofounded another startup, Altamira, which created an innovative image editor and was acquired by Microsoft, resulting in Smith becoming that company’s first graphics fellow. Most recently, he spent a decade researching and writing A Biography of the Pixel. I spoke with him via Zoom, and began by asking him to explain why he decided to produce what he calls “a canon for digital light.”

Can you talk a little bit about why it matters what a pixel is, particularly to those of us who are not computer scientists?

Well, if you think about it, all pictures are digital now. We’re Zooming right now via pixels. And in fact, because of the digital explosion, nearly all the pictures that have ever existed are digital. You have to go to museums or kindergartens to find those old analog bits.


In 1984 at Lucasfilm, Smith directed The Adventures of André & Wally B., the first cartoon from the organization later known as Pixar.

[Photo: Pixar]

We’re aswim as a race in zettapixels, I estimate—that’s 21 zeros. Isn’t it weird that none of us know what they are? It just doesn’t seem right. Daily experience is mediated via pixels. It’s pretty easy to understand why people, laypeople in particular, don’t know. But I think a lot of my colleagues don’t know, to tell you the truth. It’s because it boils down to what looks like pretty hairy math, Fourier and sampling theory.

In the book, using no math, I just strip it down. What did Fourier do? Well, he told us that everything was music. You just add up a bunch of sound waves, of different frequencies and amplitudes, and you get music. And oh, by the way, you get all the sound, and oh, by the way, you get all the pictures too. You can add up regular corrugations of different frequencies and amplitudes to get a picture of your child.
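
[As a rough illustration of what Smith is describing, and not anything from his book, the Python sketch below adds up sine waves of different frequencies and amplitudes to approximate a square wave; the same additive principle, extended to two dimensions, builds up a picture.]

```python
import math

# Fourier's idea in miniature: summing sinusoids of different frequencies and
# amplitudes reconstructs a signal. The odd harmonics, with amplitudes
# proportional to 1/k, approximate a square wave.

def square_wave_approx(t, harmonics=49):
    """Fourier-series approximation of a unit square wave with period 1."""
    return (4.0 / math.pi) * sum(
        math.sin(2.0 * math.pi * k * t) / k for k in range(1, harmonics + 1, 2)
    )

for t in (0.1, 0.25, 0.4, 0.6, 0.9):
    print(f"t={t:.2f}  value={square_wave_approx(t):+.3f}")
# Values land near +1 for t < 0.5 and near -1 for t > 0.5, as a square wave should.
```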

Although all of us in science and technology know Fourier’s math, hardly any of us know about the man himself. He almost got his head cut off in the French Revolution. It was only saved because Robespierre lost his head instead. And then Fourier went off with his new buddy Napoleon Bonaparte to Egypt, on the expedition that discovered the Rosetta stone. His story is marvelous.

He came up with Fourier wave theory, which is what we all use today. And then on top of that, there’s this next theorem, called the sampling theorem, that builds on the back of Fourier’s idea, to give us a sample. And we call that sample, in pictures, a pixel. But [it’s] a sample, by definition, that exists only at a point. You can’t see a pixel. It has zero dimensions. So one of the main confusions in the world is that the sample—which you can’t see—is confused and conflated with a little glowing spot on your display, which you can see.

Pixels are discrete, separated, choppy, discontinuous things. Display elements on your screen are soft, analog, overlapping, contiguous things.
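
[Again as an illustration added here, not Smith’s own example: the sketch below stores a band-limited 1D signal only as point samples, the “pixels,” and then reconstructs the continuous signal between them with the ideal sinc filter from the sampling theorem. The glowing spots on a display are just one practical stand-in for that reconstruction step.]

```python
import math

# The pixel as a zero-width point sample: keep only the signal's values at
# integer positions, then rebuild the continuous signal from those samples
# via Whittaker-Shannon (sinc) interpolation.

def signal(x):
    """A band-limited test signal, well under the Nyquist limit of 0.5 cycles/sample."""
    return math.sin(2 * math.pi * 0.1 * x) + 0.5 * math.cos(2 * math.pi * 0.2 * x)

samples = {n: signal(n) for n in range(-100, 101)}  # the "pixels": values at points only

def reconstruct(x):
    """Approximate the signal at any x using nothing but the stored point samples."""
    sinc = lambda u: 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(v * sinc(x - n) for n, v in samples.items())

for x in (2.5, 7.25, 13.9):
    print(f"x={x:5.2f}  true={signal(x):+.4f}  from samples={reconstruct(x):+.4f}")
# The reconstructed values track the original signal closely.
```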

To what degree is that definition of pixels unique to your outlook? If I talk to other people who have been involved with computer graphics as long as you have, would they all say exactly the same thing?

No, I think they’ll stumble all over the place, because it’s never been elucidated clearly. What I’m offering here is a set of definitions that fit known quantities very well and should stick. I’m thinking of it as a canon for digital light. These are the definitions. This is how it works. Let’s get it right, starting now, and quit fumbling around.

This book was a 10-year journey for you. Did you set out to write the book you eventually wrote, and did you expect it to be quite as wide-ranging as it turned out?

No, I didn’t know what I was doing. I got started because a fellow named Sean Cubitt, a media arts professor in Melbourne, invited me down to give a talk on—this is the term he used—a taxonomy of digital light. I thought, Whoa, I’m not sure what that means. But I liked the term digital light. I’d been looking for some words that captured what it is that I thought I was doing that “computer graphics” didn’t capture. Where does image processing fit in, for example? Where do paint programs fit in? As soon as I saw the term digital light, I said, “That’s it.” And in fact, that was the original name of my book.

I wrote this taxonomy where I explained to these media professor types what a pixel was, and all of a sudden I realized they were hungry for it. And I realized this is basic information that’s missing from the world. And I knew that I was probably one of the few people in the world who know it inside out, from the image-processing side, the paint-program side, the geometry side, and so forth. Why don’t I turn this into a book?

The interesting close on that story was that 10 years later when I had the book finished—this surprising book finished—I started looking around for somebody to publish it. And I said, “Wait a minute, Sean Cubitt is an editor at MIT Press. Go talk to him!” Well, he snapped it up.

I knew I wanted to tell the full story, but I didn’t know how many false paths I would be led down. I was shocked again and again at how wrong the received wisdom is in the stories of high technology.

I tell the story of the sampling theorem. We’re all taught that Claude Shannon did that. No, he didn’t! He never even claimed he did it. It was this Russian communist. We couldn’t admit to that, but it’s a fabulous story. And my chapter two is about Vladimir Kotelnikov, this amazing, amazing man who, so far as I can tell, proved the sampling theorem [that we currently use] in 1933 in Russia. And in the last picture we have of him, [Vladimir] Putin’s got his arm around him, knighting him in the Kremlin, on the 70th anniversary of the proof of that theorem.

Another revelation for me, although maybe it should have been obvious, is that the earliest computer scientists basically looked at imagery as a distraction and maybe even a little frivolous. Were they being short-sighted, or did it take a different kind of vision to realize that potential?

One of the remarkable discoveries for me was finding out how frivolous people thought pictures were in the beginning. Computing atom bomb calculations was their idea of nonfrivolous.

Baby, the first computer, had pixels. That was one of my astonishing discoveries. I went looking for the pictures from Baby, and all the old engineers said, “Even if we made pictures, we wouldn’t have shown you. We would’ve been wasting this precious resource, making pictures.”

But that notion changed pretty rapidly. It didn’t take until the ’60s; already in the ’50s, there were people making games. The first electronic game I’ve chased down was in ’51 or ’52. And there was the Whirlwind machine at MIT, where they were actually making pictures just for pictures’ sake. The first [computer] animation was [shown] on the See It Now television show, Edward R. Murrow’s show, in ’51. People were already starting to get it.

Unsung heroes

You certainly give pioneers like Claude Shannon and Ivan Sutherland their due. But at the same time, you make clear that they didn’t do everything, and that there were lesser-known people who were very important. Was that kind of a balancing act?

I tried just to be as honest as I could be—just as straight as I could be with the facts supporting it. And if somebody came down a notch, then they came down a notch.

Ivan helped me a lot with the book. He and I Skyped for an hour and a half. He was kind of surprised when I came up with the fact that Sketchpad wasn’t 3D. It was 2D. It was his office mate who did Sketchpad III, not Ivan. Nobody’s heard of this guy. I said, “Ivan, you didn’t do 3D. You still come across as a major man in this book, there’s no question about it. You were a big deal. But the facts are that your two office mates did 3D in perspective, not you.”

I got in touch with Tim Johnson, the guy who did Sketchpad III, and I brought all this up. I said, “Isn’t it weird that nobody knows about you?” And he says, “Well, what do you do? It’s been the story of my whole life.”

Poor Tim Johnson looks just like Ivan Sutherland. If you go out right now to the internet and look up pictures of Ivan Sutherland and Sketchpad, the majority of the pictures that you’ll get are of Tim Johnson at Sketchpad III, because they look alike.

 

I demoted Ivan. He’s just a member of a triumvirate now. He and Tim Johnson and their third office mate, Larry Roberts, actually were the movers and shakers at MIT in the early ’60s. I called Larry Roberts before he died recently. I talked to him about how he had been kind of left out. There are no awards in computer graphics named for Tim Johnson or Larry Roberts. He said, “Alvy, I get rewarded so highly for being a father of the internet that I couldn’t care less that people in computer graphics don’t know about me.” I said, “Well, I’m going to try to change that anyhow, because the form of perspective you taught us is the one that I implemented at Lucasfilm and Pixar. It’s key to how we do 3D even today. It’s just wrong that you don’t get credit for that.”

Do you have any other particularly favorite unsung heroes in the book?

Alex Schure I think is unsung. He is such a strange person. In a lot of ways, he’s the most exciting guy in our story, the story of Pixar, and the least known, and the guy who lost everything because of us. His kids ended up taking his university away from him, the New York Institute of Technology, because they thought he had wasted the school’s money on us and got nothing in return. Which pretty much is accurate.

He came to me one day and said, “Alvy, we’ve got the best computer graphics in the world, don’t we?” And I said, “Yeah, we do.”

I love to talk about Alex Schure, but it’s really hard. The way you and I are talking right now is the usual human, conversational mode. That’s not how Alex Schure worked. Alex Schure would just walk into the room. You never knew when he was going to come into the lab, as we called it: 4 in the morning, noon, 5 at night. And he would just be talking. We called it word salad, or Casey Stengel-speak.

I didn’t know what to do. And then I finally said, “Well, I’m just going to start talking too and see what happens.” And after a while, I noticed that my words had somehow been worked into his stream of words. “Okay, I think the thoughts have transferred. I don’t know how, but somehow it’s been transmitted.”

He came to me one day and said, “Alvy, we’ve got the best computer graphics in the world, don’t we?” And I said, “Yeah, we do.” And he said, “What do we need to do to stay ahead of the world?” And I said, “Well, you know, this 8-bit frame buffer you bought us—512 by 512 pixels, 8 bits per pixel, $80,000 in 1970s dollars? Give me two more of those, and I can gang them together and make a 24-bit frame buffer.”

I thought I was explaining to him the difference between 256 colors versus 16 million colors. I didn’t know whether I succeeded or not. Well, several weeks later, he drops by again and says, “You know something, Alvy? I bought you five more of those 8-bit thingies so you’d have two of those 24-bit thingies.” Well, we had the first 24-bit pixels in the world. It just threw us out in advance of the world and we never, ever looked back. And he did it just on my say-so. In today’s dollars, he spent about $2 million on that piece of memory. I was too naive to be as astonished as I should have been.
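
[The arithmetic behind that story is easy to reproduce. The dollar figures are Smith’s; the little Python calculation below just works out the memory sizes and color counts he is referring to.]

```python
# Frame-buffer arithmetic for the 512 x 512 buffers Smith describes:
# bytes of memory required and the number of representable colors.

width, height = 512, 512

for bits_per_pixel in (8, 24):
    memory_kb = width * height * bits_per_pixel // 8 // 1024
    colors = 2 ** bits_per_pixel
    print(f"{bits_per_pixel:2d} bits/pixel: {memory_kb} KB of memory, {colors:,} colors")

# 8 bits/pixel:  256 KB,         256 colors
# 24 bits/pixel: 768 KB,  16,777,216 colors ("16 million")
```

Ganging three 8-bit buffers together gives the 24-bit case, which is where the jump from 256 colors to 16 million comes from.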

All of a sudden, we had more memory than anybody else in the world. I laugh at this, because we had a fraction of everybody’s cellphone screen, and we thought it was heaven. We eventually had eighteen 8-bit frame buffers that we cobbled together in various ways.

I met Ed Catmull at New York Tech. Ed and I invented the alpha channel one night. Why? Well, we had so much memory lying around that it was nothing for us to make the leap of, “Oh, let’s add a fourth channel.” Everybody else was struggling to even have one channel or three channels. We had 18 channels. So we just added a fourth channel and overnight came up with the alpha channel. It was a very profound contribution. To tell you the truth, it took me decades to appreciate how profound that idea was. That story’s in the book.

I tell the stories of who invented the movies. It’s not who you think. It’s not Edison. It’s not Eadweard Muybridge. It’s not the Lumiere brothers. Those are the three that people always guess when I ask them. It’s a complex answer. But one of the most amazing characters is William Kennedy Laurie Dickson. This guy turns out to be the man who brought us the 35-millimeter film format. He worked for Edison. He and Edison had had a falling out. So he went and formed Biograph and was the first man to have the complete system: camera, film, projector. That’s a movie system; until you have that, you don’t have one.

Who is he? Well, it turns out his story is fabulous. And I wrote his genealogy, a scholarly genealogy, using all of the skills I’ve developed over the years. It’s on my Digital Light web page.

Tyrants through history

One of your recurring themes is that the movies were really not just Thomas Edison. Animation, or even Disney animation, was not just Walt. Pixar was certainly not just Steve Jobs. Why do people tend to gravitate toward giving almost 100% of that credit to a handful of people?

I think we humans are suckers for a good narrative. We just want it to be a simple narrative where one genius hero is responsible for it all. I think maybe it’s the stories we grew up on as kids. We just love those stories and they’re hardly ever right.

The role of the tyrant is one that kind of surprised me. I’ve had my tyrant: Steve Jobs. But he played a role. He didn’t know he was playing the role that he played, and he did what he did for all the wrong reasons. But the bottom line was, he came through with the money [for Pixar] when 45 other outfits turned us down—45. Ed and I, we pitched like crazy, and all the VCs turned us down. And General Motors and Philips turned us down, as far as corporations are concerned. As for H. Ross Perot, when he had his split with General Motors, we got thrown out at the same time he got thrown out.

The tyrant’s role turns out to be to create a safe space for the creators to do something, the tyrants often not knowing that’s what’s happened.

We just came up with this Hail Mary move: “Let’s call Steve.” He had already been kicked out of Apple, and he had had us come down to his mansion, and he proposed that he would buy us from [Lucasfilm]. And we were like, “No, we want to run our own company, but we’ll accept your money.” He agreed, but his offer was half of what General Motors was offering. And Lucasfilm essentially laughed him out of the office.

So when we were desperate, 45 funding failures later, we said, “Let’s call Steve and say, ‘Just make exactly the same offer again at half the valuation. We think Lucasfilm is just so sick of this process that they’re willing to compromise, and just get anything they can get at this point.’”

And that’s how we got Steve as our venture capitalist, which was a horror show for me personally. But the fact is, that money was what kept Pixar alive for five years while we waited for Moore’s Law to catch up—finally!—and bring us Disney, who did pay for the movies, not Jobs. His brilliant move was to take us public on nothing except the promise of [Toy Story]. And he made a billion dollars from that amazingly great business gamble.


With 1995’s Toy Story, Pixar realized cofounders Smith and Catmull’s long-held goal of producing the first computer-animated feature film.

[Photo: Disney/Pixar]

I came up with a whole list of tyrants through the book. There’s a tyrant for almost every field. It’s not that it has to be that way, but it frequently is that way. Napoleon was a tyrant, for example. And I had a chapter on TV, where [RCA’s David] Sarnoff was the tyrant, that I had to throw out because the book was too long.

The tyrant’s role turns out to be to create a safe space for the creators to do something, the tyrants often not knowing that’s what’s happened. Steve didn’t know what we were doing. He had no notion of movies. He was a hardware guy, and he would have sold us at any moment for $50 million just to make himself not embarrassed.

You say in the book that Jobs, essentially, kept writing checks because it would have been hard on his ego to admit failure.

He could not sustain the loss. He would tear Ed and me a new one, but he would write a check to take away our equity. He didn’t buy Pixar. He funded a spinout, but over the course of several years, by writing more checks and taking away equity, he eventually did buy the company completely from the employees. We ran out of money again and again. We would have been dead in Silicon Valley if we’d had any other investor but this crazy egomaniac.

Moore’s Law marches on

If Ed Catmull and you and George Lucas and John Lasseter and Steve Jobs had never lived, would the history of computer graphics as entertainment be radically different? Or would PDI or some of the other great people have done all the things you did?

It’d be different, of course, but it would have happened. Moore’s Law is probably the unsung hero of my book. Wait a minute—I sing its praises, but it’s this awesome thing, and I can’t overstate its awesomeness. An order of magnitude every five years—we can’t wrap our puny little human brains around that. We just have to ride that wave and see where it takes us.

[Moore’s Law] was at one when I made my first computer graphics picture in ’65; it’s sitting at a hundred billion now, and it’s going to hit a trillion in just a few years. What’s that mean? People are always asking, “What’s it mean?” I say, “That’s the point. You can’t know what it means until you get there.” The guys designing the chips and making Moore’s Law continue can’t tell you how they’re going to get to the next step until they get to the last step. It’s a conceptual limit on the human imagination. Or sometimes I say, “And if you can look one order of magnitude ahead, you can probably become a billionaire.” Very few people can do that.
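
[Smith’s figures line up if you take “an order of magnitude every five years” literally, as he does in the book. A tiny sketch, with the start year and rate taken from his description:]

```python
# Moore's Law as Smith states it: a factor of 10 every five years,
# normalized to 1 in 1965, when he made his first digital picture.

def moores_law_factor(year, start_year=1965, years_per_factor_of_ten=5):
    return 10 ** ((year - start_year) / years_per_factor_of_ten)

for year in (1965, 2020, 2025):
    print(year, f"{moores_law_factor(year):,.0f}")
# 1965 -> 1; 2020 -> 100,000,000,000 (a hundred billion);
# 2025 -> 1,000,000,000,000 (a trillion)
```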

At the end of the book, you talk a bit about AI and extended reality and mixed reality. These things are still relatively immature. Is there a chance that the next 60 years of the pixel will be as eventful as the first 60 years?

Boy, that’s a big question. If Moore’s Law keeps going, then the answer is definitely yes. Will it keep going? Who knows. In my lifetime, the death of Moore’s Law has been announced four or five times, and the engineers blow past it every time. I’ve heard rumblings about the upcoming death of Moore’s Law a lot recently. And just a couple of months ago, IBM announced a 2-nanometer technology. It’s still happening—2 nanometers. I don’t even have the beginnings of how to think about that. So for the next 10 years or so, I think the curve is going to keep going. Maybe at a different rate.

I’m an adviser for a VR startup company in Silicon Valley called Baobab Studios. That entrepreneurial spirit pulls out the best in us human beings. It’s also the scariest time of your life. You’re trying to start a company and keep all of those families fed, and you just don’t know if you’re going to make it or not. I like to be surrounded by young people doing that, where it’s not on my shoulders anymore. I just get to sit back and say, “Nice job, kids.”

Back in the Lucasfilm days, some of my geniuses, like Loren Carpenter, came up with the number of 80 million polygons per frame. [That was] the complexity of a picture frame that would probably keep audiences happy, and none of our software was supposed to break if you threw 80 million polygons at it. I’m talking about fully rendered, shaded, and shadowed, and so forth. But we were willing to take 30 hours to complete one of those frames. Well, Unreal Engine is doing that in real time today. I know that falls right out of Moore’s Law, but damn, it’s hard for me to wrap my brain around it.

There have been four orders of magnitude of Moore’s Law since the year 2000, when my book officially closes. This mixed reality is a difficult problem. You have a 3D world inside your computer, like we’ve always had in computer graphics. And at the same time, you’re extracting a 3D model of the world you’re physically, actually in, and combining those two and trying to solve the hidden-surface problem, the translucency problem, shadowing, all of that from these two different—but now interacting—databases. That’s hard. I think it’s going to keep a generation of SIGGRAPH researchers busy.


This interview has been edited for length and clarity. 1990 Boston Computer Society video courtesy of The Powersharing Series, a collection of 134 recordings of computer pioneers available for purchase on a USB drive. Copyright © Powersharing Inc. and used with permission. All rights reserved.
