Writing

Food Fight!

As reported here and here, Apple has updated the language in the latest release of their iPhone/iPad developer tools to explicitly disallow development with other tools and languages:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

I’m happy that Apple is being explicit about this sort of thing, rather than taking their previous passive-aggressive stance that gave more wiggle room for their apologists. This is a big “screw you” to Adobe in particular, which had been planning to release a Flash-to-iPhone converter with Creative Suite 5. I understand why they’re doing it, but in the broader scheme of what’s at stake, why pick a fight with one of the largest software vendors for the Mac?

Beyond being grounded in total, obsessive control over the platform, the argument seems to be that the only way to make a proper iPhone/iPad experience is to build things with Apple’s tools, and that requiring those tools conveniently prevents people from developing for multiple platforms at once. This has two benefits for Apple: first, it encourages developers to think within the constraints and affordances of the platform, and second, it forces potential developers to choose which platform they’re going to support. It’s not quite double the work to create an app for both, say, the iPhone and Android, but it’s fairly close. So what will people develop for? The current winner, with all the marketing and free hype from the press.

To be clear, developing within the constraints of a platform is incredibly important for getting an application right. But using Apple’s sanctioned tools doesn’t guarantee that, and using a legal document to enforce said tools steps into the ridiculous.

Fundamentally, I think the first argument — that to create a decent application you have to develop a certain way, with one set of tools — is bogus. It’s a lack of trust in your developers and, even more so, a distrust of the market. In the early days of the Macintosh, it was difficult to get companies to rework their DOS (or even Apple II) applications to use the now-familiar menu bars and icons. The Human Interface Guidelines addressed it specifically. And when companies ignored those warnings and released software that was a clear port of a DOS equivalent, people got upset and the software got trashed. Just search for the phrase “not mac-like” and you’ll get the picture. Point being, people came around on developing for the Mac, and it didn’t require a legal document saying that developers had to use MPW and ResEdit.

The market demanded software that felt like Macintosh applications, and it’s the same for the iPhone and iPad. On the tools side, free choice also meant that the market produced far better tools than what Apple provided — instead of the archaic MPW (ironically, itself something of a terminal application), Think Pascal, Lightspeed C, Metrowerks CodeWarrior, and even Resorcerer all filled in various gaps at different times, each providing a better platform than (or at least a suitable alternative to) Apple’s tools.

But as in this earlier post, it seems that Apple is being run by someone who is re-fighting battles of the ’80s and ’90s, yet whose personal penchant for control prevents him from learning from their outcomes. That rhyming sound you hear? It’s history.

Friday, April 9, 2010 | cs, languages, mobile, software  

On needing approval for what we create, and losing control over how it’s distributed

I’ve been trying to organize my thoughts about the iPad and the direction that Apple is taking computing along with it. It’s really an extension of the way they look at the iPhone, which I found unsettling at the time; with the iPad, we’re all finally coming around to the idea that they really, really mean it.

I want to build software for this thing. I’m really excited about the idea of a touch-screen computing platform that’s available for general use from a known brand that has successfully marketed unfamiliar devices to a wide audience. (Compare this to, say, Microsoft’s Tablet PC push that began in the early 2000s and is… nowhere?)

It represents an incredible opportunity, but I can’t get excited about it because of Apple’s attempt to control who creates for it, and what they can create for it. Their policy of being the sole distributor of applications, and even worse, requiring approval on all applications, is insulting to developers. Even the people who have created Mac software for years are being told they can no longer be trusted.

I find it offensive on a very basic level, because I know that if such restrictions were in place when I was first learning to write software — mostly on Apple machines, no less — I would not have a career in the field. Or if we had to pay regular fees to become a developer, use only Apple-provided tools, and could release only approved software through an Apple store, things like the Processing project would not have happened. I can definitively say that any success that I’ve had has come from the ability to create what I want, the way that I want, and to be able to distribute it as I see fit, usually over the internet.

As background, I’m writing this as a long-time Apple user who started with an Apple ][+ and later the original 128K Mac. A couple months ago, Apple even profiled my work here.

You’ll shoot your eye out, kid!

There’s simply no reason to prevent people from installing anything they want on the iPad. The same goes for the iPhone. When the iPhone appeared, Steve Jobs made the ridiculous claim that a rogue application could “take down the network.” That’s an insult to common sense (if it were true, the networks would have a serious, serious design flaw). It’s also an insult to our intelligence, except perhaps for the Apple fans who keep repeating it.

But even if you believed the Bruce Willis movie version of how mobile networks are set up, it simply does not hold true for the iPad (or the iPod Touch before it). The $499 iPad, which has no cellular hardware at all, is in no danger of “taking down” anyone’s network, yet its applications will still be required to go through the App Store and, therefore, its approval process.

The irony is that the original Mac was nearly a failure because of Jobs’ insistence at the time on how closed the machine must be. I recall reading that it would have been, were it not for the engineers who developed AppleTalk networking in spite of his determination to keep the Macintosh an island unto itself. Networking helped make the “Macintosh Office” possible by connecting a series of Macs to the laser printer (introduced at the same time), and so followed the desktop publishing revolution of the mid-80s. Until that point, the 128K Macintosh was largely a $2,500 novelty.

For the amazing number of lessons that Jobs seems to have learned in his many years in technology, his insistence on control is, to me, the glaring omission. It’s sad that Jobs groks the idea of computers designed for humans, but then consistently slides into unnecessary lockdown. It’s an all-too-human failing of wanting too much control.

Only available on the Crapp Store!

For all the control that Apple has taken over the content on the App Store, it hasn’t prevented the garbage. Applications for jiggling boobs or shaking babies have somehow made it through the same process that delayed the release or update of many other developers’ applications for weeks. Some have been removed, but only after an online uproar of keyboards and pitchforks. It’s the same approval process that OKs flashlight apps and fart apps by the dozen.

Obvious instances aside, the line of “appropriate” will always be subjective. The line moved last week when Apple decided to remove 5,000 “overtly sexual” applications, which might make sense, but instead comes across as hypocritical when the same criteria aren’t applied to established names like Playboy.

Somebody’s forgetting the historical mess of “I know it when I see it.” It’s an unanswerable dilemma (or is that an enigma?), so why place yourself in the position of arbiter?

Another banned application was a version of Dope Wars, a game that dates back to the mid-80s. Inappropriate? Maybe. A problem? Only if children have been turning to lives of crime since its early days as an MS-DOS console program, or on their programmable TI calculators. Perhaps the faux-realistic interface style of the iPhone OS tipped the scales.

The problem is that, fundamentally, it’s just never going to be possible to prevent the garbage. If you want to have a boutique, like the Apple retail stores, where you can buy a specially selected subset of merchandise from third parties, then great. But instead, that kind of retail control (a smart idea) has been conflated with being the only conduit by which software can be sold for the platform (a flawed idea from the start).

Your toaster doesn’t need a hierarchical file system

Anyone who has spent five minutes helping someone with their computer will know that the overwhelming majority of people don’t need full access to the file system, and that it’s a no-brainer to begin hiding as much of it as possible. The idea of the iPad as appliance (and the iPhone before it) is an obvious, much-needed step in the user interface of computing devices.

(Of course, the hobbyist in me doesn’t want that version, since I still want access to everything, but most people I know who don’t spend all their time geeking out on the computer have no use for the confusion. I’m happy to separate those parts.)

And frankly, it’s an obvious direction, and it’s actually much closer to very early versions of Mac OS — the original System and Finder — than it is to OS X. Mac OS X is as complicated as Windows. My father, also an early Mac user, began using PCs as Apple fell apart in the late 90s. He hasn’t returned to the Mac, largely because of the learning curve for OS X, which is no longer head and shoulders above Windows in terms of ease of use. Certainly the overall UI is better, clearer, and more thoughtfully put together. But the reason to switch nowadays has less to do with the UI, and more to do with the way you can lose control of a Windows machine to the garbage installed by PC vendors, the required virus scanners, the malware scanners, and all the malware that gets through in spite of it all.

The amazing Steven Frank, co-founder of Panic, puts things in terms of Old World and New World computing. But I believe he’s conflating the issue of the device feeling (and working) in a more appliance-like fashion with the issue of who controls what goes on the device, and how it’s distributed to the device. I’m comfortable with the idea that we don’t need access to the file system, and that it doesn’t need to feel like a “computer.” I’m not comfortable with people being prevented by a licensing agreement, or worse, sued, for hacking the device to work that way.

It Just Works, except when It Doesn’t

The “it just works” mantra often credited to Apple is — to borrow the careful elocution of Steve Jobs — “bullshit.” To use an example: if things “just worked,” I’d be able to copy music from my iPod back to my laptop, or from one machine that I own to another. If I’ve paid for that music (whether it’s DRM-free or even if I made the MP3 myself), there’s simply no reason I should be restricted from copying it this way. Instead, the assumption that I’m doing something illegal is built into the software, preventing an obvious use.

Of course, I assume that, as implemented, this restriction is something that was “required” by the music industry. But to my knowledge, there’s simply no proof of that. No such statement has been made; more likely, Apple fans just find the “evil music industry” or the “evil RIAA” easier to blame. That thinking conveniently overlooks the fact that Apple has demanded similar restrictions in others’ projects, in cases where it actually does have control over such matters.

Bottom line: trying to save the music collection of a family member whose laptop has crashed is a great time, and it’s only made better by having to dig up a piece of freeware that will let me copy the music from their iPod back to their now-blank machine. The music that they spent so much money on at the iTunes Store.

Like “don’t be evil,” the “it just works” phrase applies, or it doesn’t. Let’s not keep repeating the mantra and conveniently ignoring the times when the opposite is true.

It’s been a long couple of months, and it’s only getting longer

One of the dumbest things I’ve seen in the two months since the iPad announcement is articles that compare the device to other computers and complain that it doesn’t have feature x, y, or z. That’s silly to me, because it’s not a general-purpose computer like we’re used to. And yes, I’m fully aware of the irony of that statement if you take it too literally. I am in fact complaining about what’s missing from the iPad (and iPhone), but about things that have been removed or disallowed for reasons of control, and that don’t actually improve the experience of using the device. Now stop thinking so literally.

The thing that will be interesting about the iPad is the experience of using it — something that nobody has had except for the folks at Apple — and as is always the case when dealing with a different type of interface, predictions about it are going to be wrong.

So what is it? I’m glad you asked…

Who is this for?

As Teri likes to point out, it’s also important to note the appeal of this device to a different audience — our parents. They need something like an iPhone with a bigger screen that allows them to browse the internet, read lots of email, and answer a few. (No word yet on whether the iPad will have the ability to forward YouTube videos, chain e-mails, or internet jokes.) For them, “it’s just a big iPhone” is a selling point. The point is not that the iPad is for old people; the point is that it’s a new device category that will find its way into interesting niches that we can’t ascertain until we play with the thing.

Any time you have a genuinely new device like this one, it doesn’t make a lot of sense at first. It simply doesn’t fit with anything that we’re currently used to. So we get a lot of lazy tech writers who go on about how it’s under-featured (it’s a small computer! it’s a big phone!) or how it doesn’t make sense in the lineup. This is a combination of a lack of creativity (rather than tearing the thing down, think about how it might be used!) and perhaps the need to fill column inches, in spite of the fact that none of these people has used the device, so we simply don’t know. It’s the same thing that’s so dumb about pre-game shows for sports. What could be more boring than a bunch of people arguing about what might happen? The only thing that’s interesting about the game is what does happen (and how it happens). I know you’ve got to write something, but man, it’s gonna be a long couple of weeks until the device arrives.

It’s Perfect! I love it like it is.

There’s also talk about the potential disappearance of extensions or plug-in applications. While Mac OS extensions (of OS 9 and earlier) were a significant source of crashes on older machines, they also contributed to the success of the platform. Those extensions wouldn’t have been installed if there weren’t a reason; the fact is, they were valuable enough that having them around was worth the occasional sobbing over an hour of lost work after a system crash.

I think the anti-extension arguments come from people who are imagining the ridiculous number of extensions on others’ machines, while disregarding the fact that they themselves badly needed something like Suitcase to handle the number of fonts on their own system. As time goes on, people will want to do a wider range of things with the iPhone/iPad OS too. The original Finder and System had a version 3 as well (actually they skipped 3.0, but never mind that), just like the iPhone. Go check that out, and now compare it to OS X. The iPhone OS will get crapped up soon enough. Installing more than 2-3 pages of apps on the iPhone already breaks down the UI (using search is not the answer — that’s the equivalent of giving up in UI design), and I’m curious to see what the oft-rumored multitasking support in iPhone OS 4 will do to things.

And besides, without things like WindowShade, what UI elements could be licensed (or stolen) and incorporated into the OS? Ahem.

I’d never bet against people who tinker, and neither should Apple.

I haven’t even covered issues from the hardware side, in spite of having grown up taking apart electronics, in awe of the Heathkit stereo my dad built. But it’s the sort of thing that disturbs our friends at MAKE, and others have written about similar issues. Peter Kirn has more on just how bad the device is in terms of openness. One of the most egregious hardware problems is that the device’s only connection to the outside world is a proprietary port, access to which has to be licensed from Apple. Not only is that a departure from the Apple ][ days of having actual digital and analog ports on the back (it was like an Arduino! but slower…), it’s not even something as standard as USB.

But why would you artificially keep this audience away? To make a couple extra percent on licensing fees? How sustainable is that? Sure, it’s a tiny fraction of users, but they’re some of the most important — the people who are going to do new and interesting things with your platform and take it in new directions. Just like the engineers who sneaked networking into the original Macintosh, or those who built entire industries around extending the Apple ][ to do new things. Aside from the schools, these were the people who kept that hardware relevant long enough for Apple to screw up the Lisa and Mac projects for a few years while they got their bearings.

Enough…

I am not a futurist, but at the end of it all, I’m pretty disappointed by where things seem to be heading. I spend a lot of effort on making things, and on trying to get others to make things, and having someone in charge of what I make, and of how I distribute it, is incredibly grating. And the fact that they’re having this much success with it is saddening.

It may even just work.

Friday, March 12, 2010 | cs, mobile, notafuturist, software  

JavaScript: The Good Parts

Watched Douglas Crockford’s “JavaScript: The Good Parts” talk, based on his book of the same name. I like Crockford’s work on JSON—or rather, the idea of simple file formats that need simple APIs to work with them. More important, with the continued evolution of processing.js, I’m really optimistic about where things are headed with JavaScript. (You might say I’m feeling a bit hopey changey about it.) I’ve had Crockford’s book in my reading pile for a while and finally got around to watching the talk last week.

It was Netscape (or maybe Sun?) that renamed the “LiveScript” language to “JavaScript” (because Java was the it-language at the time), and I’d avoided it for a long time. His talk points out a series of things to avoid in the JavaScript syntax; in fact, I think I enjoyed the explanation of the “Bad Parts” a bit more. By clearing out a few things, the whole starts making more sense. It’s an interesting discussion for anyone scratching their head about this incredibly pervasive language found in web browsers, one that’s rapidly becoming more exciting as support for Canvas and WebGL evolves.

Tuesday, February 23, 2010 | cs, languages, processing, speaky  

Dick Brass

An interesting op-ed by Dick Brass, a former vice president at Microsoft, on how the company’s internal structure can get in the way of innovation, with specific examples. The first relates to ClearType and the difficulty of getting it integrated into other products:

Although we built it to help sell e-books, it gave Microsoft a huge potential advantage for every device with a screen. But it also annoyed other Microsoft groups that felt threatened by our success.

Engineers in the Windows group falsely claimed it made the display go haywire when certain colors were used. The head of Office products said it was fuzzy and gave him headaches. The vice president for pocket devices was blunter: he’d support ClearType and use it, but only if I transferred the program and the programmers to his control. As a result, even though it received much public praise, internal promotion and patents, a decade passed before a fully operational version of ClearType finally made it into Windows.

Another case concerns the attempts to build the Tablet PC, in stark contrast to Apple’s (obvious and necessary) redesign of iWork for their upcoming iPad:

Another example: When we were building the tablet PC in 2001, the vice president in charge of Office at the time decided he didn’t like the concept. The tablet required a stylus, and he much preferred keyboards to pens and thought our efforts doomed. To guarantee they were, he refused to modify the popular Office applications to work properly with the tablet. So if you wanted to enter a number into a spreadsheet or correct a word in an e-mail message, you had to write it in a special pop-up box, which then transferred the information to Office. Annoying, clumsy and slow.

Having spent time in engineering meetings where similar arguments were made, I find it interesting to see how that perspective translates into actual outcomes. ClearType has seemingly crawled its way to a modest success (though the technique arguably dates back to Apple ][ displays), while Microsoft’s Tablet efforts remain a failure. But neither represents the common-sense approach that has had such an influence on Apple’s success.

Update: A shockingly bad official response has been posted to Microsoft’s corporate blog. While I took the original article to be one person’s perspective, the lame retort (inline smiley face and all) does more to reinforce Brass’ argument.

Thursday, February 4, 2010 | cs, failure, software  

Barbara Liskov wins Turing Award

What a brilliant woman:

Liskov, the first U.S. woman to earn a PhD in computer science, was recognized for helping make software more reliable, consistent and resistant to errors and hacking. She is only the second woman to receive the honor, which carries a $250,000 purse and is often described as the “Nobel Prize in computing.”

I’m embarrassed to admit that I wasn’t more familiar with her work prior to reading about it in Tuesday’s Globe, but wow:

Liskov’s early innovations in software design have been the basis of every important programming language since 1975, including Ada, C++, Java and C#.

Liskov’s most significant impact stems from her influential contributions to the use of data abstraction, a valuable method for organizing complex programs. She was a leader in demonstrating how data abstraction could be used to make software easier to construct, modify and maintain…

In another contribution, Liskov designed CLU, an object-oriented programming language incorporating clusters to provide coherent, systematic handling of abstract data types. She and her colleagues at MIT subsequently developed efficient CLU compiler implementations on several different machines, an important step in demonstrating the practicality of her ideas. Data abstraction is now a generally accepted fundamental method of software engineering that focuses on data rather than processes.

This has nothing to do with gender, of course, but I find it exciting apropos of this earlier post regarding women in computer science.

Thursday, March 12, 2009 | cs, gender  

What has driven women out of Computer Science?

Casey yesterday noted this article from the New York Times on the declining number of women who are pursuing computer science degrees. Declining as in “wow, weren’t the numbers too low already?” From the article’s introduction:

ELLEN SPERTUS, a graduate student at M.I.T., wondered why the computer camp she had attended as a girl had a boy-girl ratio of six to one. And why were only 20 percent of computer science undergraduates at M.I.T. female? She published a 124-page paper, “Why Are There So Few Female Computer Scientists?”, that catalogued different cultural biases that discouraged girls and women from pursuing a career in the field. The year was 1991.

Computer science has changed considerably since then. Now, there are even fewer women entering the field. Why this is so remains a matter of dispute.

The article goes on to explain that even though there is far better gender parity (since 1991) when looking at roles in technical fields, computer science still stands alone in moving backwards.

The text also covers some of the “do it with gaming!” nonsense. As someone who became interested in programming because I didn’t like games, I’ve never understood why gaming was pushed as a cure-all for disinterest in programming:

Such students who choose not to pursue their interest may have been introduced to computer science too late. The younger, the better, Ms. Margolis says. Games would offer considerable promise, except that they have been tried and have failed to have an effect on steeply declining female enrollment.

But I couldn’t agree more with the sentiment with regard to age. I know of two all-girls schools (Miss Porter’s in Connecticut and Nightingale-Bamford in New York) that have used Processing in courses with high school and middle school students, and I couldn’t be more excited about it. Let’s hope there are more.

Tuesday, November 18, 2008 | cs, gender, reading  

Human Computation (or “Mechanical Turk” meets “Family Feud”)

Computers are really good at repetitive work. You can ask a computer to multiply two numbers together seven billion times and not only will it not complain, it’ll probably have seven billion answers for you a few seconds later. Ask a person to do the same thing and they’ll either walk away at the outset, realizing the ridiculousness of the task, or they’ll get through the first few tries and lose interest. But even the fact that a human can recognize the ridiculousness of the task is important. Humans are good at lots of things—like identifying a face in a crowd—that cannot be addressed by computation with the same level of accuracy.

Visualization is about the interface between what humans are good at, and what computers are good at. First, the computer can crunch all seven billion numbers, then present the results in a way that we can use our own perceptual skills to identify what’s important or interesting. (This is also why the design of a visualization is a fundamentally human task, and not something to be left to automation.)

This is also the subject of Luis von Ahn’s work at Carnegie Mellon. You’re probably familiar with CAPTCHA images—usually wavy numbers and letters that you have to discern when signing up for a webmail account or buying tickets from Ticketmaster. The acronym stands for “Completely Automated Public Turing Test to Tell Computers and Humans Apart,” a clever mouthful referring to Alan Turing’s work in discerning man or machine. (I encourage you to read about them, but this is already getting long so I won’t get into it here.)

More interesting than CAPTCHA, however, is the whole notion behind it: relying on humans to do what they’re best at, a task that remains difficult for computers. (Sure, in recent weeks people have actually found ways to “break” CAPTCHAs in specific cases, but that’s not important here.) For instance, the work was extended to the Google Image Labeler, described as follows:

You’ll be randomly paired with a partner who’s online and using the feature. Over a two-minute period, you and your partner will:

  • View the same set of images.
  • Provide as many labels as possible to describe each image you see.
  • Receive points when your label matches your partner’s label. The number of points will depend on how specific your label is.
  • See more images until time runs out.

Prior to this, most image labeling systems relied on getting volunteers to name or tag images individually. As you can imagine, the quality of the tags suffers considerably, due to everything from differences in how people perceive or describe what they see, to individuals who try to be a little too clever in choosing tags. The Image Labeler game turns that around: there’s an incentive to choose tags that the other person will also choose, which minimizes both problems. (It’s “Mechanical Turk” meets “Family Feud.”) They’ve also applied the same ideas to scanning books—where fragments of text that cannot be recognized by software are instead checked by multiple people.
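To make that matching incentive concrete, here’s a toy Processing sketch of the idea (my own illustration with made-up labels, not Google’s implementation): a label only counts when both players typed it independently.

    // Toy illustration of label matching (my own example, not Google's code).
    // A label only counts when both players chose it independently.
    String[] playerA = { "dog", "beach", "frisbee", "sunset" };
    String[] playerB = { "beach", "dog", "ocean", "ball" };

    for (String a : playerA) {
      for (String b : playerB) {
        if (a.equals(b)) {
          println("agreed label: " + a);  // prints "dog" and "beach"
        }
      }
    }

Because clever-but-idiosyncratic tags almost never match, the game quietly filters them out without anyone having to review the labels by hand.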

More recently, von Ahn’s group has expanded these ideas in Games With A Purpose, a site that addresses these “casual games” more directly. The new site is covered in this New Scientist article, which offers additional tidbits (perspective? background? couldn’t think of the right word).

You can also watch Luis’ Google Tech Talk about Human Computation, which if I’m not mistaken, led to the Image Labeler project.

(We met Luis a couple times while at CMU and watched the Super Bowl with his awesome fiancée Laura, cheering on her hometown Chicago Bears against those villainous Colts. We were happy when he received a MacArthur Fellowship for his work—just the sort of person you’d like to see get an award that highlights people who often don’t quite fit in their field.)

Returning to the earlier argument, algorithms to identify a face in a crowd are certainly improving. But without a significant breakthrough, their usefulness will remain severely limited. One commonly hyped use for such systems is airport security. Bruce Schneier explains the problem:

Suppose this magically effective face-recognition software is 99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99 percent chance that the software indicates “terrorist,” and if someone is not a terrorist, there is a 99.99 percent chance that the software indicates “non-terrorist.” Assume that one in ten million flyers, on average, is a terrorist. Is the software any good?

No. The software will generate 1000 false alarms for every one real terrorist. And every false alarm still means that all the security people go through all of their security procedures. Because the population of non-terrorists is so much larger than the number of terrorists, the test is useless. This result is counterintuitive and surprising, but it is correct. The false alarms in this kind of system render it mostly useless. It’s “The Boy Who Cried Wolf” increased 1000-fold.

Given the number of travelers at Boston Logan in 2006, that would be two “terrorists” identified per day. (And with Schneier’s figure of one in ten million being a terrorist, that would be two or three actual terrorists per year… clearly too generous, which means the system would perform even worse than he describes.) I find myself thinking about that 99.99% accuracy number as I stare at the backs of heads lined up at the airport security checkpoint—itself a human problem, not a computational one.
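For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope version as a Processing sketch. It’s just my own restatement of the numbers in the quote, assuming the stated error rate applies equally to both kinds of mistakes:

    // Back-of-the-envelope check of Schneier's numbers (my restatement, not his code).
    double accuracy = 0.9999;                  // 99.99% accurate, as stated
    double falsePositiveRate = 1 - accuracy;   // so 0.01% of innocents get flagged
    double flyers = 10000000;                  // ten million flyers...
    double terrorists = 1;                     // ...containing one terrorist

    double trueAlarms = terrorists * accuracy;                       // about 1
    double falseAlarms = (flyers - terrorists) * falsePositiveRate;  // about 1,000

    println("true alarms per ten million flyers:  " + trueAlarms);
    println("false alarms per ten million flyers: " + falseAlarms);

Roughly a thousand false alarms for every real hit, which is where the “Boy Who Cried Wolf” comparison comes from.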

Thursday, May 15, 2008 | cs, games, human, perception, security  
Book

Visualizing Data is my 2007 book about computational information design. It covers the path from raw data to how we understand it, detailing how to begin with a set of numbers and produce images or software that lets you view and interact with information. When first published, it was the only book for people who wanted to learn how to actually build a data visualization in code.

The text was published by O’Reilly in December 2007 and can be found at Amazon and elsewhere. Amazon also has an edition for the Kindle, for people who aren’t into the dead tree thing. (Proceeds from Amazon links found on this page are used to pay my web hosting bill.)

Examples for the book can be found here.

The book covers ideas found in my Ph.D. dissertation, which is the basis for Chapter 1. The next chapter is an extremely brief introduction to Processing, which is used for the examples. Next (Chapter 3) is a simple mapping project that places data points on a map of the United States. Of course, the idea is not that lots of people want to visualize data for each of the 50 states. Instead, it’s a jumping-off point for learning how to lay out data spatially.
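To give a flavor of what “laying out data spatially” means in practice, here’s a minimal Processing sketch of the general approach. It is not the book’s example code; the data file name, its columns, and the projection bounds are all assumptions for illustration.

    // Minimal sketch of placing data on a map (not the book's example code).
    // Assumes a hypothetical tab-separated file "states.tsv" with a header row
    // and columns: abbrev, latitude, longitude, value.
    Table locations;

    void setup() {
      size(640, 400);
      locations = loadTable("states.tsv", "header, tsv");
    }

    void draw() {
      background(255);
      fill(0, 120);
      noStroke();
      for (TableRow row : locations.rows()) {
        float lon = row.getFloat("longitude");
        float lat = row.getFloat("latitude");
        float value = row.getFloat("value");
        // crude linear mapping of the continental U.S. onto the window
        float x = map(lon, -125, -66, 0, width);
        float y = map(lat, 50, 24, 0, height);
        ellipse(x, y, sqrt(value), sqrt(value));
      }
    }

The point of the exercise is the last few lines: converting each record’s geographic coordinates into screen coordinates and drawing a mark whose size reflects the data.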

The chapters that follow cover six more projects, such as salary vs. performance (Chapter 5) and zipdecode (Chapter 6), followed by more advanced topics dealing with trees, treemaps, hierarchies, and recursion (Chapter 7), plus graphs and networks (Chapter 8).

This site is used for follow-up code and writing about related topics.