Writing

This here is a ghost town

This blog was created in 2008 and hasn’t been actively updated for several years.

For more recent work, please visit the Fathom site, where you can see current projects, or read the latest updates on what we’ve been up to.

For Processing Foundation work, please visit the site or check out our page on GitHub.

Tuesday, April 14, 2015 | site  

And speaking of height…

Another wonderful example, more powerful as words than as an image:

Jan Pen, a Dutch economist who died last year, came up with a striking way to picture inequality. Imagine people’s height being proportional to their income, so that someone with an average income is of average height. Now imagine that the entire adult population of America is walking past you in a single hour, in ascending order of income.

The first passers-by, the owners of loss-making businesses, are invisible: their heads are below ground. Then come the jobless and the working poor, who are midgets. After half an hour the strollers are still only waist-high, since America’s median income is only half the mean. It takes nearly 45 minutes before normal-sized people appear. But then, in the final minutes, giants thunder by. With six minutes to go they are 12 feet tall. When the 400 highest earners walk by, right at the end, each is more than two miles tall.

(From The Economist, by way of Eva)
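
The arithmetic behind the parade is nothing more than linear scaling: someone’s height is the average height multiplied by the ratio of their income to the mean income. A throwaway Processing sketch of that mapping, using made-up round numbers rather than Pen’s actual data:

// Height scales linearly with income: height = averageHeight * (income / meanIncome).
// The mean income and the 5'10" average below are invented round numbers, not Pen's data.
float meanIncome = 50000;
float averageHeightFt = 5.83f;

float paradeHeight(float income) {
  return averageHeightFt * (income / meanIncome);
}

void setup() {
  println(paradeHeight(25000));     // median-ish earner: about 2.9 feet, roughly waist-high
  println(paradeHeight(100000));    // about 11.7 feet
  println(paradeHeight(90000000));  // about 10,500 feet, or roughly two miles
}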

Tuesday, February 1, 2011 | finance, scale  

The importance of showing numbers in context

An info graphic from the Boston Globe:

measuring in shaq inches

Monday, January 31, 2011 | basketball, scale, sports  

Come work with us in Boston

Fathom Information Design is looking for developers and designers. Come join us!

We’re looking for people to join us at Fathom. For all the positions, you’ll be creating work like you see on fathom.info, plus more mobile projects (Android, iOS, JavaScript) and the occasional installation piece. If you’re a developer, design skills are a plus. Or if you’re a designer, same goes for coding.

  • Developer – Looking for someone with a strong background in Java, and some C/C++ as well. On Monday this person would be sorting out more advanced aspects of a client project. On Tuesday they would hone the Processing Development Environment, mercilessly crushing bugs. On Wednesday they would refactor critical visualization tools used by brilliant scientists. On Thursday they could put out a fire in another client project without breaking a sweat, and on the fifth day, they would choose what we’re having for Beer Friday. This messiah also might not mind being referred to in the third person.
  • Web Developer – In 1996, I used Java for my Computer Graphics 2 homework at Carnegie Mellon. I’ll never forget the look on the face of my professor Paul Heckbert (Graphics Gems IV, Pixar, and now Gigapan — a man who wrote an actual ray tracer in C code that fit on the back of a business card), when he asked me during office hours why this was a good idea. Your professor did the same thing when you told him (or her) that you’d be implementing your final project with JavaScript and Canvas. We need amazing things to happen with HTML, CSS, and JavaScript, and you’re the person to do it.
  • Junior Designer – You’ve finished your undergrad design program and feel the need to make beautiful things. Your commute is spent fixing the typography in dreadful subway ads (only in your head, please). You are capable of pixel-level detail work to get mobile apps or a web site just right. And if we’re lucky, you’re so good with color that you’ve been mistaken for an impressionist painter.
  • Senior Designer – So all that stuff above that the Junior Designer candidate thinks they can do? You can actually do it. And more important, you have the patience and humility to teach it to others around you. You’re also an asset on group projects, best friends with developers, and adored by clients.

At the moment, we’re only looking for people located in (or willing to relocate to) the Boston area.

Please send résumé or CV, links to relevant work, and cover letter to inquire (at) fathom (dot) info. Please do not write us individually, as that may void your contest entry.

Monday, January 17, 2011 | opportunities  

Minnesota, meet Physics

The roof of the Metrodome springs a leak following heavy snow in Minnesota:

I’ve clearly been looking at too many particle and fluid dynamics simulations, because this looks fake to me — more like a simulation created by the structural engineers of what would happen if the roof were to collapse than thousands of pounds of honest-to-goodness midwestern snow pummeling the turf, seemingly in slow motion. Beautiful.

And another version from a local FOX affiliate in Minnesota:

Sunday, December 12, 2010 | physical, simulation, water  

The growth of the Processing project

Number of Processing users, every four weeks, since 2005:

humbling and terrifying

Long version: this is a tally of the number of unique users who run the Processing environment every four weeks, as measured by the number of machines checking for updates.

Of note:

  • In spite of the frequently proclaimed “death of Java” or “death of Java on the desktop,” we’re continuing to grow. This isn’t to say that Java on the desktop is undead, but this frustrating contradiction presents a considerable challenge for us… I’ll write more about that soon.
  • There’s a considerable (even comical) dip each January, when people decide that the holidays and drinking with their family is more fun than coding (or maybe that’s only my household). Things also tail off during the summer into August. These two trends are amplified by the number of academic users; however, other data I’ve seen (web traffic, etc.) suggests that the rest of the world actually operates on something like the academic calendar as well.

About the data:

  • This is a very conservative estimate of the number of Processing users out there. Our software is free — we don’t have a lot to gain by inflating the numbers.
  • This covers only unique users — we don’t double count the same person in each 4-week period. Otherwise our numbers would be much higher.
  • This is not downloads, which are also significantly higher.
  • This is every four weeks, not every month. Unless there are 13 months in a year. Wait, how many months are in a year?
  • This only covers people who are using the actual Processing Development Environment — no Eclipse users, etc.
  • Use of processing.js or spinoff projects are not included.
  • This doesn’t include anyone who has disabled checking for updates.
  • This doesn’t include anyone not connected to the net.
  • The unique ID is stored in the preferences.txt file, so if several people share a single login on one machine, they’re counted as one person. Conversely, if you have multiple machines, you’ll be counted more than once.
  • Showing the data by day, week, or year shows the same overall trend.
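
To make those counting rules concrete, here’s a rough sketch (written as a Processing sketch, though this is not the actual code we use) of how unique users per four-week window could be tallied from a log of update checks. The log file name and its format are made up for illustration:

// A rough sketch of tallying unique users per four-week window from update-check
// logs. Assumes a hypothetical file where each line is "timestamp,machineId".
import java.util.HashSet;
import java.util.TreeMap;

void setup() {
  long fourWeeks = 28L * 24 * 60 * 60 * 1000;
  TreeMap<Long, HashSet<String>> buckets = new TreeMap<Long, HashSet<String>>();
  for (String line : loadStrings("update-checks.csv")) {
    String[] parts = split(line, ',');
    long bucket = Long.parseLong(parts[0]) / fourWeeks;       // which four-week window
    if (!buckets.containsKey(bucket)) buckets.put(bucket, new HashSet<String>());
    buckets.get(bucket).add(parts[1]);                        // a set counts each machine once
  }
  for (Long b : buckets.keySet()) {
    println("window " + b + ": " + buckets.get(b).size() + " unique users");
  }
}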

This is a pretty lame visualization of the numbers, and I’m not even showing other interesting tidbits like what OS, version, and so on are in use. Maybe we can release the data if we can figure out an appropriate way to do so.

Tuesday, November 2, 2010 | processing  

Processing + Eclipse

Exciting news! The short story is that there’s a new Processing Plug-in for Eclipse, and you can learn about it here.

twins!

The long story is that Chris Lonnen contacted me in the spring about applying for the Google Summer of Code (SoC) program, which I promptly missed the deadline for. But we eventually managed to put him to work anyway, via Fathom (our own SoC army of one, with Chris working from afar in western New York) with the task of working on a new editor that we can use to replace the current Processing Development Environment (the PDE).

After some initial work and scoping things out, we settled on the Eclipse RCP as the platform, with the task of first making a plug-in that works in the Eclipse environment (everything in Eclipse is a plug-in), which could then eventually become its own standalone editor to replace the current PDE.

Things are currently incomplete (again, see the Wiki page for more details), but give it a shot, file bugs (tag with Component-Eclipse when filing), and help lend Chris a hand in developing it further. Or if you have questions, be sure to use the forum. Come to think of it, might be time for a new forum section…

Tuesday, October 19, 2010 | processing  

When you spend your life doing news graphics…

…like Karl Gude has, then parking lots start to look like this:


Tuesday, October 19, 2010 | mapping, news, perception  

Ever feel like there’s just a tiny curtain protecting your privacy online?

This piece from Niklas Roy made me laugh out loud:

Built with Processing and AVR-GCC.

(Thanks to Golan, who pointed out this link.)

Monday, October 18, 2010 | laughinglikeanidiotatyourcomputer, processing  

Already checked it in Photoshop, so you don’t have to

I wasn’t going to post this one, but I can’t get it out of my head. In the image below, the squares marked A and B are the same shade of gray.

prepare to have your mind blown. what's that? it already was?

The image is from Edward H. Adelson at MIT, and you can find my original source here. More details (proof, etc) on Adelson’s site here, which includes this explanation:

The visual system needs to determine the color of objects in the world. In this case the problem is to determine the gray shade of the checks on the floor. Just measuring the light coming from a surface (the luminance) is not enough: a cast shadow will dim a surface, so that a white surface in shadow may be reflecting less light than a black surface in full light. The visual system uses several tricks to determine where the shadows are and how to compensate for them, in order to determine the shade of gray “paint” that belongs to the surface.

The first trick is based on local contrast. In shadow or not, a check that is lighter than its neighboring checks is probably lighter than average, and vice versa. In the figure, the light check in shadow is surrounded by darker checks. Thus, even though the check is physically dark, it is light when compared to its neighbors. The dark checks outside the shadow, conversely, are surrounded by lighter checks, so they look dark by comparison.

A second trick is based on the fact that shadows often have soft edges, while paint boundaries (like the checks) often have sharp edges. The visual system tends to ignore gradual changes in light level, so that it can determine the color of the surfaces without being misled by shadows. In this figure, the shadow looks like a shadow, both because it is fuzzy and because the shadow casting object is visible.

The “paintness” of the checks is aided by the form of the “X-junctions” formed by 4 abutting checks. This type of junction is usually a signal that all the edges should be interpreted as changes in surface color rather than in terms of shadows or lighting.

As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view.
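
If you want to convince yourself of the local-contrast point without opening Photoshop, here’s a tiny Processing sketch (not Adelson’s figure, just an illustration of the same effect): both inner squares use exactly the same gray, but the one sitting on the dark surround reads as lighter.

// Two identical gray squares (fill value 128) on different surrounds.
// The one on the dark background appears lighter than the one on the
// light background, even though the pixels are the same.
void setup() {
  size(400, 200);
  noStroke();
  noLoop();
}

void draw() {
  fill(40);
  rect(0, 0, 200, 200);      // dark surround
  fill(220);
  rect(200, 0, 200, 200);    // light surround
  fill(128);
  rect(60, 60, 80, 80);      // same gray on the dark side
  rect(260, 60, 80, 80);     // same gray on the light side
}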

(Like the earlier illusion post, this one’s also from my mother-in-law, who should apparently be writing this blog instead of its current—woefully negligent—author.)

Sunday, October 17, 2010 | perception, science  

Processing 0191 for Android

Casey and I are in Chicago this weekend for the Processing+Android conference at UIC, organized by Daniel Sauter. In our excitement over the event, we posted revision 0191 last night (we tried to post from the back of Daniel’s old red Volvo, but Sprint’s network took exception). The release includes several Android-related updates, mostly fixes from Andres Colubri to improve how 3D works. Get the download here:

http://processing.org/download/ (under pre-releases)

Also be sure to keep an eye on the Wiki for Android updates:
http://wiki.processing.org/w/Android

(By the time you read this, there may be newer pre-releases like 0192, or 0193, and so on. Use those instead.)

Release notes for the 0191 update follow. And we’ll be doing a more final release (1.3 or 2.0, depending) once things settle a bit.

Processing Revision 0191 – 30 September 2010

Bug fix release. Contains major fixes to 3D for Android.

[ changes ]

+ Added option to preferences panel to enable/disable smoothing of text inside the editor.

+ Added more anti-aliasing to the Linux interface. Things were downright ugly in places where the defaults differ from Windows and Mac OS X.

[ bug fixes ]

+ Fix a problem with Linux permissions in the download.
http://code.google.com/p/processing/issues/detail?id=343

+ Fix ‘redo’ command to follow various OS conventions.
http://code.google.com/p/processing/issues/detail?id=363
Linux: ctrl-shift-z, macosx cmd-shift-z, windows ctrl-y
http://en.wikipedia.org/wiki/Table_of_keyboard_shortcuts
http://developer.apple.com/mac/library/documentation/

+ Remove extraneous console messages on export.

+ When exporting, don’t include a library multiple times.

+ Fixed a problem where no spaces in the size() command caused an error.
http://code.google.com/p/processing/issues/detail?id=390

[ andres 1, android 0 ]

+ Implemented offscreen operations in A3D when FBO extension is not available
http://code.google.com/p/processing/issues/detail?id=300

+ Get OpenGL matrices in A3D when GL_OES_matrix_get extension is not available
http://code.google.com/p/processing/issues/detail?id=286

+ Implemented calculateModelviewInverse() in A3D
http://code.google.com/p/processing/issues/detail?id=287

+ Automatic clear/noClear() switch in A3D
http://code.google.com/p/processing/issues/detail?id=289

+ Fix camera issues in A3D
http://code.google.com/p/processing/issues/detail?id=367

+ Major fixes for type to work properly in 3D (fixes KineticType)
http://code.google.com/p/processing/issues/detail?id=358

+ Lighting and materials testing in A3D
http://code.google.com/p/processing/issues/detail?id=294

+ Generate mipmaps when the GL_OES_generate_mipmaps extension is not available.
http://code.google.com/p/processing/issues/detail?id=288

+ Finish screen pixels/texture operations in A3D
http://code.google.com/p/processing/issues/detail?id=298

+ Fixed a bug in the camera handling. This was a quite urgent issue, since it affected pretty much everything. It went unnoticed until now because the math error canceled out with the default camera settings.
http://forum.processing.org/topic/possible-3d-bug

+ Also finished the implementation of the getImpl() method in PImage, so it initializes the texture of the new image in A3D mode. This makes the CubicVR example work fine.

[ core ]

+ Fix background(PImage) for OpenGL
http://code.google.com/p/processing/issues/detail?id=336

+ Skip null entries with trim(String[])

+ Fix NaN with PVector.angleBetween
http://code.google.com/p/processing/issues/detail?id=340

+ Fix missing getFloat() method in XML library

+ Make sure that paths are created with saveStream(). (saveStream() wasn’t working when intermediate directories didn’t exist)

+ Make createWriter() use an 8k buffer by default.

Friday, October 1, 2010 | processing  

Matthew Carter wins a MacArthur

I’m really happy to see typographer Matthew Carter receive a well-deserved MacArthur “Genius” Grant. A short video:

Very well put:

I think they’re saying to me, “You’ve done all this work. Well done… Here’s an award, now do more. Do better.” And it’s very nice, at my age, to be told by someone, that “we expect more from you. And here’s the means to help you achieve that.”

And if you’re not familiar with Carter’s name, you know his work: he created both Verdana and Georgia, at least one of which will be found on nearly any web site (the text you’re reading now is Georgia). Microsoft’s commission of these web fonts helped improve design on the web significantly in the mid-to-late 90s. Carter also developed several other important typefaces like Bell Centennial (back in the 70s), the tiny text found in phone books.

Tuesday, September 28, 2010 | typography  

Awesome now travels by poster tube

A few weeks ago I received a note from Ed Fries, who was interested in a distellamap-style print of his recently-finished Halo 2600.

Halo? Like the Xbox game by Bungie?

Why, yes! Sure enough, he’s written a version of the game for the Atari 2600.

You can play the game here, and if you don’t drown in the awesome (or die from laughing), you can now purchase prints here. Like the other distellamap prints, it shows how the image and code data coexist and interact inside an Atari 2600 cartridge game:

with all new colors!

A detail of what it looks like up close:

grab the key!

(And as with the other prints, proceeds are given to charity.)

Saturday, September 4, 2010 | distellamap, prints  

That wasn’t all he lost on his trip to Tiny

Trying to open an SVG with Illustrator, and she tells me this sad story…

shoulda listened to mom instead of the guys

Have a good Friday, everyone.

Friday, September 3, 2010 | thisneedsfixed  

Conveying multiple realities in research and journalism

A recent Boston Globe editorial covers the issue of multiple, seemingly (if obviously) contradictory statements that come from complex research, in this case around the oil spill:

Last week, Woods Hole researchers reported a 22-mile-long underwater plume that they mapped out in the Gulf of Mexico in June — a finding indicating that much more oil may lie deep underwater and be degrading so slowly that it might affect the ecosystem for some time. Also last week, University of Georgia researchers estimated up to 80 percent of the spill may still be at large, with University of South Florida researchers finding poisoned plankton between 900 feet and 3,300 feet deep. This differed from the Aug. 4 proclamation by Administrator Jane Lubchenco of the National Oceanic and Atmospheric Administration that three-quarters of the oil was “completely gone’’ or dispersed and the remaining quarter was “degrading rapidly.’’

But then comes the Lawrence Berkeley National Laboratory, which this week said a previously unclassified species of microbes is wolfing down the oil with amazing speed. This means that all the scientists could be right, with massive plumes being decimated these past two months by an unexpected cleanup crew from the deep.

This is often the case for anything remotely complex, thanks to the opacity of the research process to the general public, the varying communication skills of different institutions, the gap between what the public cares about (whose fault is it? how bad is it?) and the interests of the researchers, and so on.

It’s a basic issue around communicating complex ideas, and therefore affects visualization too — it’s rare that there’s a single answer.

sadness

On a more subjective note, I don’t know if I agree with the editorial’s premise that it’s on the government to sort out the mess for the public. It’s certainly a role of the government, though the sniping at the Obama administration makes the editorial writer sound like someone equally likely to bemoan government spending, size, etc. But I could write an equally (perhaps more) compelling editorial making the point that it’s actually the role of newspapers like the Globe to sort out newsworthy issues that concern the public. But sadly, the Globe, or at least the front page of boston.com, has been overly obsessed with more click-ready topics like the Craigslist killer (or any other rapist, murderer, or stomach-turning story involving children du jour) and playing “gotcha” with spending and taxes for universities and public officials. What a bunch of ghouls.

(Thanks to my mother-in-law for the article link.)

Wednesday, September 1, 2010 | government, news, reading, science  

Scientific identification, ordering, & quantification of awesome

There may be many versions of the Periodic Table, but this is my favorite.

it's more fun to be a 10-year-old boy than a crusty old academic

The image was created by The Dapperstache, who has since updated the graphic, but I prefer this version with its bevel-crazy gradient awfulness.

Saturday, August 21, 2010 | infographics  

Fathom

Ben Fry LLC now has a proper name, and it is Fathom. Or if you want to be formal about it, “Fathom Information Design”.

And today we launched a new site, fathom.info, for our work. (I’ll still be using benfry.com for my older research projects, Processing updates, software and visualization ramblings, book updates…)

We also have a new project that launched yesterday with GE, this time looking at shifts in age within world populations. A little more info about it is on the Fathom updates page (some might call it a blog). And when we have a chance, we hope to post a bit more of the process behind the piece.

Friday, July 23, 2010 | fathom  

Processing 0187

New release available shortly in the pre-releases section of processing.org/download.

More bug fixes, and one new treat for OS X users. Hopefully we’re about set
to call this one 1.2. Please test and report any issues you find.

[ additions ]

+ On Mac OS X, you’re no longer required to have a sketch window open at
all times. This will make the application feel more Mac-like–a little
more elegant and trendy and smug with superiority.

+ Added a warning to the Linux version to tell users that they should be
using the official version of Java from Sun if they’re not.
http://wiki.processing.org/w/Supported_Platforms#Linux
There isn’t a perfect way to detect whether Sun Java is in use,
so please let us know how it works or if you have a better idea.

[ fixes ]

+ “Unexpected token” error when creating classes with recent pre-releases.
http://code.google.com/p/processing/issues/detail?id=292

+ Prevent horizontal scroll offset from disappearing.
Thanks to Christian Thiemann for the fix.
http://code.google.com/p/processing/issues/detail?id=280
http://code.google.com/p/processing/issues/detail?id=10

+ Fix NullPointerException when making a new sketch on non-English systems.
http://code.google.com/p/processing/issues/detail?id=283

+ Fixed a problem when using command-line arguments with exported sketches
on Windows. Thanks to davbol for the fix.
http://code.google.com/p/processing/issues/detail?id=303

+ Added requestFocusInWindow() call to replace Apple’s broken requestFocus(),
which should return the previous behavior of sketches getting focus
immediately when loaded in a web browser.
http://code.google.com/p/processing/issues/detail?id=279

+ Add getDocumentBase() version of createInput() for Internet Explorer.
Without this, sketches will crash when trying to find files on a web server
that are not in the exported .jar file. This fix is only for IE. Yay IE!

Monday, July 12, 2010 | processing  

Processing 0186

Mixed bag of updates as a follow-on to release 0185.

[ mixed bag ]

Android SDK requirement is now API 7 (Android 2.1), because Google has deprecated API 6 (2.0.1).

More Linux PDF fixes from Matthias Breuer. Thanks!

PDF library matrix not reset between frames. (Fixed in 0185.)
http://dev.processing.org/bugs/show_bug.cgi?id=1227

Updated the URLs opened by the software to reflect the new site layout.
http://code.google.com/p/processing/issues/detail?id=278

Updated the included examples with recent changes.

Friday, June 25, 2010 | processing  

Processing 0185

Just posted release 0185 of Processing on the download page. It’s a pre-release for what will eventually become 1.2 or 1.5. Please test and file bugs if you find problems. The list of revisions is below:

PROCESSING 0185 – 20 June 2010

Primarily a bug fix release. The biggest changes are a couple of tweaks for problems caused by Apple’s Update 2 for Java on OS X, so this should make Processing usable on Macs again.

[ bug fixes ]

+ Fix for Apple bug that caused an assertion failure when requestFocus() was called in some situations. This was causing the PDE to become unusable for opening sketches, and focus highlighting was no longer happening.
http://code.google.com/p/processing/issues/detail?id=258
http://dev.processing.org/bugs/show_bug.cgi?id=1564
http://dev.processing.org/bugs/show_bug.cgi?id=1569

+ Fixed two bugs with fonts created with specific charsets.

+ Fix from jdf for PImage(java.awt.Image img) and ARGB images. The method “public PImage(java.awt.Image)” was setting the format to RGB (even if ARGB)

+ Large number of beginShape(POINTS) not rendering correctly on first frame
http://dev.processing.org/bugs/show_bug.cgi?id=1572

+ Fix for PDF library and createFont() on Linux, thanks to Matthias Breuer.
http://dev.processing.org/bugs/show_bug.cgi?id=1566

+ Fix from takachin for a problem with full-width space with Japanese IME.
http://dev.processing.org/bugs/show_bug.cgi?id=1531

+ Reset matrix for the PDF library in between frames; also added begin/endDraw between frames
http://dev.processing.org/bugs/show_bug.cgi?id=1227

[ additions ]

+ Add the changes for “Copy as HTML” to replace the “Copy for Discourse” function, now that we’ve shut down the old YaBB discourse board.
http://code.google.com/p/processing/issues/detail?id=271

+ Option to disable re-opening sketches when you start Processing. The default will stay the same, but if you don’t like the feature, alter your preferences.txt file to change:
last.sketch.restore=true
to the following:
last.sketch.restore=false
The issue was originally filed here:
http://dev.processing.org/bugs/show_bug.cgi?id=1501
http://code.google.com/p/processing/issues/detail?id=245
However the main problem with this is that due to other errors, the wrong sketches are being opened, sketches are sometimes forgotten, or windows are opened concurrently on top of one another, creating a bad situation:
http://code.google.com/p/processing/issues/detail?id=177
http://code.google.com/p/processing/issues/detail?id=179
Those bugs are not yet fixed, but will be addressed in future releases.

+ Option to change the default naming of sketches via preferences.txt.
First, you can change the prefix, which defaults to:
editor.untitled.prefix=sketch_
And the suffix is handled using dates. The current default (since 1.0) is:
editor.untitled.suffix=MMMdd
Or if you want to switch back to the old (six digit) style, you could use:
editor.untitled.suffix=yyMMdd
http://dev.processing.org/bugs/show_bug.cgi?id=1091

+ Updated bundled JRE/tools to 6u20 for Windows and Linux

+ Several SVG fixes and additions, including some tweaks from PhiLho. These changes will be documented in a future release once the API changes are complete.

+ Added option to launch a sketch directly w/ linux. Thanks to Larry Kyrala.
http://dev.processing.org/bugs/show_bug.cgi?id=1549

+ Pass actual exceptions from InvocationTargetException in registered methods, which improves how exceptions are reported with libraries.

+ Added loading.gif to the js version of the applet loader. Not sure if this is actually working or not, but it’s there.

[ android ]

+ Added permissions for INTERNET and WRITE_EXTERNAL_STORAGE to the default AndroidManifest.xml file. This will be addressed in greater detail here:
http://code.google.com/p/processing/issues/detail?id=275
And with the implementation of code signing here:
http://code.google.com/p/processing/issues/detail?id=222

+ Lots of work happening underneath with regards to Android, more updates soon as things start evening out a bit.

+ Defaulting to a WVGA screen for the default Processing AVD.

Monday, June 21, 2010 | processing  

The Pleasures of Imagination

A wonderful article by Yale professor Paul Bloom on imagination:

Our main leisure activity is, by a long shot, participating in experiences that we know are not real. When we are free to do whatever we want, we retreat to the imagination—to worlds created by others, as with books, movies, video games, and television (over four hours a day for the average American), or to worlds we ourselves create, as when daydreaming and fantasizing. While citizens of other countries might watch less television, studies in England and the rest of Europe find a similar obsession with the unreal.

Another portion talks about emotional response:

The emotions triggered by fiction are very real. When Charles Dickens wrote about the death of Little Nell in the 1840s, people wept—and I’m sure that the death of characters in J.K. Rowling’s Harry Potter series led to similar tears. (After her final book was published, Rowling appeared in interviews and told about the letters she got, not all of them from children, begging her to spare the lives of beloved characters such as Hagrid, Hermione, Ron, and, of course, Harry Potter himself.) A friend of mine told me that he can’t remember hating anyone the way he hated one of the characters in the movie Trainspotting, and there are many people who can’t bear to experience certain fictions because the emotions are too intense. I have my own difficulty with movies in which the suffering of the characters is too real, and many find it difficult to watch comedies that rely too heavily on embarrassment; the vicarious reaction to this is too unpleasant.

The essay is based on an excerpt of his book, How Pleasure Works: The New Science of Why We Like What We Like, which looks like a good read if I could clear out the rest of the books on my reading pile.

A reading pile that, of course, contains too little fiction.

Friday, June 4, 2010 | creativity  

Illusive

A terrific set of videos from the “Best Illusion of the Year” contest. Congratulations to all the finalists, in particular first prize winner Kokichi Sugihara, whose video is below:

More from Kokichi Sugihara (including an explanation of how this works) can be found here.

(thanks to my mother-in-law, who sent the link)

Saturday, May 22, 2010 | perception, science  

The Evolution of Privacy on Facebook

Inspired by this post by Kurt Opsahl of the EFF, Matt McKeon of IBM’s Visual Communication Lab created the following visualization depicting the evolution of the default privacy settings on Facebook:

sorry, still don't have an account on fb

Has a couple nice visual touches that prevent it from looking like YAHSVPOQUFOTI (yet another highly-stylized visualization piece of questionable utility found on the internet). Also cool to see it was built with Processing.js.

Friday, May 7, 2010 | javascript, privacy, processing, refine, social  

Cake Versus Pie: A Scientific Approach

Allie Brosh, who appears to be some sort of genius, brings us definitive arguments in the cake versus pie debate. Best to read the entire treatise, but here are a few highlights on how clearly pie defeats cake:

Ability of enjoyment to be sustained over time

what am i doing?

Couldn’t agree more: it always seems like a good idea on the first bite, and then I catch myself. What am I doing? I hate cake. Another graphic:

Unequal frosting distribution is a problem

mommy says don't swear about your dessert

I grew up requesting pie for my birthday (strawberry rhubarb, thank you very much) instead of cake. This resonates. More importantly (for this site), Brosh cites the enormous impact of pie vs. cake for information design and visualization:

Pie is more scientifically versatile:

eat your heart out, tufte. no pun intended.

Again, you really should read the full post, or the rest of her site for that matter. Her piece on the Alot is alone worth the price of admission.

Friday, May 7, 2010 | infographics, represent  

Pinhole camera image of the Sun’s path

A beautiful image taken by a pinhole camera, showing the Sun’s path over six months:

times square curvey billboards, eat your heart out

From the explanation:

The picture clearly shows the path of the sun through the sky over the last six months. I believe you can see we didn’t have a great summer by the broken lines at the top. More sun shone in the month of October.

The post also links to a description of how to make your own.

Tuesday, April 13, 2010 | physical, science  

Food Fight!

As reported here and here, Apple has updated the language in the latest release of their iPhone/iPad developer tools to explicitly disallow development with other tools and languages:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

I’m happy that Apple is being explicit about this sort of thing, rather than their previous passive aggressive stance that gave more wiggle room for their apologists. This is a big “screw you” to Adobe in particular, who had been planning to release a Flash-to-iPhone converter with Creative Suite 5. I understand why they’re doing it, but in the broader scheme of what’s at stake, why pick a fight with one of the largest software vendors for the Mac?

In addition to being grounded in total, obsessive control over the platform, the argument seems to be that the only way to make a proper iPhone/iPad experience is to build things with their tools, as a way to prevent people from developing for multiple platforms at once. This has two benefits: first, it encourages developers to think within the constraints and affordances of the platform, and second, it forces potential developers to make a choice of which platform they’re going to support. It’s not quite doubling the amount of work that would go into creating an app for both, say, the iPhone and Android, but it’s fairly close. So what will people develop for? The current winner with all the marketing and free hype from the press.

To be clear, developing within the constraints of a platform is incredibly important for getting an application right. But using Apple’s sanctioned tools doesn’t guarantee that, and using a legal document to enforce said tools steps into the ridiculous.

Fundamentally, I think the first argument — that to create a decent application you have to develop a certain way, with one set of tools — is bogus. It’s a lack of trust in your developers and even moreso, a distrust of the market. In the early days of the Macintosh, it was difficult to get companies to rework their DOS (or even Apple II) applications to use the now-familiar menu bars and icons. The Human Interface Guidelines addressed it specifically. And when companies ignored those warnings, and released software that was a clear port from a DOS equivalent, people got upset and the software got trashed. Just search for the phrase “not mac-like” and you’ll get the picture. Point being, people came around on developing for the Mac, and it didn’t require a legal document saying that developers had to use MPW and ResEdit.

The market demanded software that felt like Macintosh applications, and it’s the same for the iPhone and iPad. On the tools side, the free choice also meant that the market produced far better tools than what Apple provided — instead of the archaic MPW (ironically, itself something of a terminal application), Think Pascal, Lightspeed C, Metrowerks Codewarrior, and even Resorcerer all filled in various gaps at different times, all providing a better platform than (or at least a suitable alternative to) Apple’s tools.

But like this earlier post, it seems like Apple is being run by someone who is re-fighting battles of the 80s and 90s, but whose personal penchant for control prevents him from learning from the outcomes. That rhyming sound you hear? It’s history.

Friday, April 9, 2010 | cs, languages, mobile, software  

Cut! Cut! Paste. Cut!

Nice heat map image of how people use the menu bar in Firefox by Alex Faaborg:

copy! copy! paste! copy!

Most of the results are what you’d expect, but fun to see it nonetheless. Some other info graphics using the same data can be found here, and even better, the raw data can be found here.
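
The basic recipe behind this kind of graphic is simple enough to sketch: tally how often each menu item is clicked, then map each count onto a color ramp. A rough Processing sketch of the idea, with made-up menu items and counts standing in for the real data:

// Map a usage count for each menu item onto a color ramp: pale for rarely
// used items, saturated for heavily used ones. Items and counts are invented.
String[] items = { "New Tab", "New Window", "Cut", "Copy", "Paste", "Quit" };
int[] counts   = { 120, 40, 210, 480, 350, 25 };

void setup() {
  size(300, 240);
  noStroke();
  noLoop();
}

void draw() {
  int maxCount = max(counts);
  for (int i = 0; i < items.length; i++) {
    float t = counts[i] / (float) maxCount;                        // 0..1
    fill(lerpColor(color(255, 245, 200), color(190, 30, 30), t));  // pale yellow to deep red
    rect(0, i * 40, width, 40);
    fill(0);
    text(items[i] + "  (" + counts[i] + ")", 10, i * 40 + 25);
  }
}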

Thursday, April 1, 2010 | data, heatmap, interact, inventory  

What this interminable conflict needs is a *mind map*

worse than boehner's health care diagram

What’s that?

It’s actually a map of counter-insurgency strategy for Afghanistan?

Oh.

Wednesday, March 31, 2010 | networks, news, politics, thisneedsfixed  

Controlled leaks and pre-announcements

This Wall Street Journal piece sounds a lot like a controlled leak:

Apple Inc. plans to begin producing this year a new iPhone that could allow U.S. phone carriers other than AT&T Inc. to sell the iconic gadget, said people briefed by the company.

The new iPhone would work on a type of wireless network called CDMA, these people said. CDMA is used by Verizon Wireless, AT&T’s main competitor, as well as Sprint Nextel Corp. and a handful of cellular operators in countries including South Korea and Japan. The vast majority of carriers world-wide, including AT&T, use another technology called GSM.

(Paranoid emphasis my own.) Apple (like any other major company) has been known to use leaks to their advantage, and there has been an uptick of next generation iPhone rumors (double-size screen, faster processor, thinner, Verizon) in the past week that seems to coincide with the announcement of several promising-sounding Android phones (big screens, fancy features, 4G and HSPA+ networks, thin, light, lots of providers). It doesn’t seem like Apple is terribly worried about Android, but aggressively keeping the Android platform from getting any sort of traction would make good business sense.

I think this is the first time that I’ve seen such rumors appearing to coincide with Android launches (that you probably didn’t even hear about), which gave me some hope that Android might be going somewhere. (I use an iPhone and a Nexus One. I’m rooting for competition and better products more than either platform.)

Microsoft was always good at using pre-announcements to kill competitors’ products (“oh, I can wait a couple months for the Microsoft solution…”), which is of course different than just leaking. Microsoft often wouldn’t ship the product, or would ship a version far inferior to what was announced or leaked, but in the meantime, they had successfully screwed the competitor. I think it’s safe to assume that there will be a new iPhone (or two) in June like there have been the past several years.

And now, back to playing with data… rumors are clearly not my thing.

Tuesday, March 30, 2010 | mobile, rumors  

Yeah, that sounds about right…

More greatness from xkcd:

the no longer secret life of numbers

(Thanks Andrea)

Monday, March 29, 2010 | inventory  

A glimpse of modern reporting

Colin Raney turned me on to this project (podcast? article? info graphic? series? part of what’s great is that there isn’t really a good term for this) by the team of five running the Planet Money podcast for NPR. To explain toxic assets, they bought one, and are now tracking its demise:

losing $1000 isn't usually this elegant

Here I’m showing the info graphic, which is just one component of telling the broader story. The series does a great job of balancing 1) investigative journalism (an engaging story), 2) participation by a small team (the four reporters plus their producer pooled $200 apiece), 3) timeliness and relevance, 4) really understanding an issue (toxic assets are in the news but we still don’t quite get it), 5) distribution (blog with updates, regular podcast), and 6) telling a story with information graphics (being able to track what’s happening with the asset).

I could keep adding to that numbered list, but my hastily and poorly worded point is that the idea is just right.

Perhaps if the papers weren’t so busy wringing their hands about the loss of classified ads, this would have been the norm five years ago, when it should have been. But it’s a great demonstration of where we need to be with online news, particularly as it’s consumed with all these $500 devices we keep purchasing, which deliver the news in a tiny, scrolly text format that echoes the print version. A print format that’s hundreds of years old.

Anyhow, this is great. Cheers to the Planet Money folks.

(Another interesting perspective here, from TechDirt, which was the original link I read.)

Friday, March 26, 2010 | infographics, news  

Shirts with Zips

Got a note during SXSW from Marc Cull at CafePress, telling me that they were doing real-time order visualization using an adaptation of zipdecode (explained in Visualizing Data). Fun! Gave me a giggle, at any rate:

the united states, before being ironed or air-fluffed
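
For anyone who hasn’t seen it, the heart of zipdecode is tiny: plot every zip code by latitude and longitude, and as each digit is typed, highlight only the codes that start with the typed prefix. A rough sketch of that idea (not the code from the book; the “zips.tsv” file and its layout are hypothetical):

// A rough sketch of the zipdecode idea: plot zip codes as points and
// highlight the ones matching the digits typed so far.
String[] lines;
String typed = "";

void setup() {
  size(720, 450);
  // each line: zip, latitude, longitude separated by tabs (hypothetical file)
  lines = loadStrings("zips.tsv");
}

void draw() {
  background(0);
  for (String line : lines) {
    String[] parts = split(line, '\t');
    float x = map(float(parts[2]), -125, -66, 0, width);   // longitude to x
    float y = map(float(parts[1]), 50, 24, 0, height);     // latitude to y
    stroke(parts[0].startsWith(typed) ? color(255, 230, 0) : color(80));
    point(x, y);
  }
}

void keyPressed() {
  if (key >= '0' && key <= '9' && typed.length() < 5) typed += key;
  if (key == BACKSPACE && typed.length() > 0) typed = typed.substring(0, typed.length() - 1);
}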

Tuesday, March 23, 2010 | adaptation, zipdecode  

On needing approval for what we create, and losing control over how it’s distributed

I’ve been trying to organize my thoughts about the iPad and the direction that Apple is taking computing along with it. It’s really an extension of the way they look at the iPhone, which I found unsettling at the time, but with the iPad we’re all finally coming around to the idea that they really, really mean it.

I want to build software for this thing. I’m really excited about the idea of a touch-screen computing platform that’s available for general use from a known brand who has successfully marketed unfamiliar devices to a wide audience. (Compare this to, say, Microsoft’s Tablet PC push that began in the mid-2000s and is… nowhere?)

It represents an incredible opportunity, but I can’t get excited about it because of Apple’s attempt to control who creates for it, and what they can create for it. Their policy of being the sole distributor of applications, and even worse, requiring approval on all applications, is insulting to developers. Even the people who have created Mac software for years are being told they can no longer be trusted.

I find it offensive on a very basic level, because I know that if such restrictions were in place when I was first learning to write software — mostly on Apple machines, no less — I would not have a career in the field. Or if we had to pay regular fees to become a developer, use only Apple-provided tools, and could release only approved software through an Apple store, things like the Processing project would not have happened. I can definitively say that any success that I’ve had has come from the ability to create what I want, the way that I want, and to be able to distribute it as I see fit, usually over the internet.

As background, I’m writing this as a long-time Apple user who started with an Apple ][+ and later the original 128K Mac. A couple months ago, Apple even profiled my work here.

You’ll shoot your eye out, kid!

There’s simply no reason to prevent people from installing anything they want on the iPad. The same goes for the iPhone. When the iPhone appeared, Steve Jobs made a ridiculous claim that a rogue application could “take down the network.” That’s an insult to common sense (if it were true, then the networks have a serious, serious design flaw). It’s also an insult to our intelligence, except for the Apple fans who repeat this ridiculous statement.

But even if you believed the Bruce Willis movie version of how mobile networks are set up, it simply does not hold true for the iPad (and the iPod Touch before it). The $499 iPad that has no data network hardware is not in danger of “taking down” anyone’s cell network, but applications will still be required to go through the app store and therefore, its approval process.

The irony is that the original Mac was almost a failure because of Jobs’ insistence at the time on how closed the machine must be. I recall reading that it nearly was, were it not for the engineers who developed AppleTalk networking in spite of his insistence on keeping the original Macintosh an island unto itself. Networking helped make the “Macintosh Office” possible by connecting a series of Macs to the laser printer (introduced at the same time), and so followed the desktop publishing revolution of the mid-80s. Until that point, the 128K Macintosh was largely a $2500 novelty.

For the amazing number of lessons that Jobs seems to have learned in his many years in technology, his insistence on control is to me the glaring exception. It’s sad that Jobs groks the idea of computers designed for humans, but then consistently slides into unnecessary lockdown restrictions. It’s an all-too-human failing of wanting too much control.

Only available on the Crapp Store!

For all the control that Apple’s taken over the content on the App Store, it hasn’t prevented the garbage. Applications for jiggling boobs or shaking babies have somehow made it through the same process that delayed the release or update of many other developers’ applications for weeks. Some have been removed, but only after an online uproar of keyboards and pitchforks. The same approval process that OKs flashlight apps by the dozen, and fart apps.

Obvious instances aside, the line of “appropriate” will always be subjective. The line changed last week when Apple decided to remove 5,000 “overtly sexual” applications, which might make sense, but is instead hypocritical when they don’t apply the same criteria to established names like Playboy.

Somebody’s forgetting the historical mess of “I know it when I see it.” It’s an unanswerable dilemma (or is that an enigma?), so why place yourself in the position of being the arbiter?

Another banned application was a version of Dope Wars, a game that dates back to the mid-80s. Inappropriate? Maybe. A problem? Only if children have been turning to lives of crime since its early days as an MS-DOS console program, or on their programmable TI calculators. Perhaps the faux-realistic interface style of the iPhone OS tipped the scales.

The problem is that fundamentally, it’s just never going to be possible to prevent the garbage. If you want to have a boutique, like the Apple retail stores, where you can buy a specially selected subset of merchandise from third parties, then great. But instead, we’ve conflated wanting to have that kind of retail control (a smart idea) with the only conduit by which software can be sold for the platform (an already flawed idea).

Your toaster doesn’t need a hierarchical file system

Anyone who has spent five minutes helping someone with their computer will know that the overwhelming majority don’t need full access to the file system, and that it’s a no-brainer to begin hiding as much of it as possible. The idea of the iPad as appliance (and the iPhone before it) is an obvious, much needed step in the user interface of computing devices.

(Of course, the hobbyist in me doesn’t want that version, since I still want access to everything, but most people I know who don’t spend all their time geeking out on the computer have no use for the confusion. I’m happy to separate those parts.)

And frankly, it’s an obvious direction, and it’s actually much closer to very early versions of Mac OS — the original System and Finder — than it is with OS X. Mac OS X is as complicated as Windows. My father, also an early Mac user, began using PCs as Apple fell apart in the late 90s. He hasn’t returned to the Mac largely because of the learning curve for OS X, which is no longer head and shoulders above Windows in terms of its ease of use. Surely the overall UI is better, clearer, and more thoughtfully put together. But the reason to switch nowadays is less to do with the UI, and more to do with the way that one can lose control of their Windows machines due to the garbage installed by PC vendors, the required virus scanning software, the malware scanning software, and all the malware that gets through in spite of it all.

The amazing Steven Frank, co-founder of Panic, puts things in terms of Old World and New World computing. But I believe he’s mixing the issue of the device feeling (and working) in a more appliance-like fashion with the issue of who controls what goes on the device, and how it’s distributed to the device. I’m comfortable with the idea that we don’t need access to the file system, and it doesn’t need to feel like a “computer.” I’m not comfortable with people being prevented by a licensing agreement, or worse, sued, for hacking the device to work that way.

It Just Works, except when It Doesn’t

The “it just works” mantra often credited to Apple is — to borrow the careful elocution of Steve Jobs — “bullshit.” To use an example, if things “just worked” then I’d be able to copy music from my iPod back to my laptop, or from one machine that I own to another. If I’ve paid for that music (whether it’s DRM-free or even if I made the MP3 myself), there’s simply no reason that I should be restricted from copying this way. Instead we have the assumption that I’m doing something illegal built into the software, and preventing obvious use.

Of course, I assume that as implemented, this feature is something that was “required” by the music industry. But to my knowledge, there’s simply no proof of that. No such statement has been made, and more likely, it’s just easier for Apple fans to blame the “evil music industry” or “evil RIAA.” This thinking avoids noticing that Apple has also demanded similar restrictions for others’ projects, in a case where they actually have control over such matters.

Bottom line, trying to save the music collection of a family member whose laptop has crashed is a great time, and it’s only made better by taking the time to dig up a piece of freeware that will let me copy the music from their iPod back to their now blank machine. The music that they spent so much money on at the iTunes Store.

Like “don’t be evil,” the “it just works” phrase applies, or it doesn’t. Let’s not keep repeating the mantra and conveniently ignoring the times when the opposite is true.

It’s been a long couple of months, and it’s only getting longer

One of the dumbest things that I’ve seen in the two months since the iPad announcement is articles that compare the device to other computers, complaining that it doesn’t have feature x, y, or z. That’s silly to me because it’s not a general purpose computer like we’re used to. And yes, I’m fully aware of the irony of that statement if you take it too literally. I am in fact complaining about what’s missing from the iPad (and iPhone), though it’s about things that have been removed or disallowed for reasons of control, and don’t actually improve the experience of using the device. Now stop thinking so literally.

The thing that will be interesting about the iPad is the experience of using it — something that nobody has had except for the folks at Apple — and as is always the case when dealing with a different type of interface, you’re always going to be wrong.

So what is it? I’m glad you asked…

Who is this for?

As Teri likes to point out, it’s also important to note the appeal of this device to a different audience — our parents. They need something like an iPhone, with a bigger screen, that allows them to browse the internet and read lots of email and answer a few. (No word yet on whether the iPad will have the ability to forward YouTube videos, chain e-mails, or internet jokes.) For them, “it’s just a big iPhone” is a selling point. The point is not that the iPad is for old people; the point is that it’s a new device category that will find its way into interesting niches that we can’t ascertain until we play with the thing.

Any time you have a new device, such as this one, it also doesn’t make a lot of sense. It simply doesn’t fit with anything that we’re currently used to. So we have a lot of lazy tech writers who go on about how it’s under-featured (it’s a small computer! it’s a big phone!) or how it doesn’t make sense in the lineup. This is a combination of a lack of creativity (rather than tearing the thing down, think about how it might be used!) and perhaps an interest in filling column inches, in spite of the fact that none of these people has used the device, so we simply don’t know. It’s part of what’s so dumb about pre-game shows for sports. What could be more boring than a bunch of people arguing about what might happen? The only thing that’s interesting about the game is what does happen (and how it happens). I know you’ve got to write something, but man, it’s gonna be a long couple of weeks until the device arrives.

It’s Perfect! I love it like it is.

There’s also talk about the potential disappearance of extensions or plug-in applications. While Mac OS extensions (of OS 9 and earlier) were a significant reason for crashes on older machines, they also contributed to the success of the platform. Those extensions wouldn’t be installed if there weren’t a reason, and the fact is, they were valuable enough that having them present was worth the occasional sobs over an hour of lost work after a system crash.

I think the anti-extension arguments come from people who are imagining the ridiculous number of extensions on others’ machines, but disregarding the fact that they badly needed something like Suitcase to handle the number of fonts on their system. As time goes on, people will want to do a wider range of things with the iPhone/iPad OS too. The original Finder and System had a version 3 too (actually they skipped 3.0, but nevermind that), just like the iPhone. Go check that out, and now compare it to OS X. The iPhone OS will get crapped up soon enough. Just as installing more than 2-3 pages of apps on the iPhone breaks down the UI (using search is not the answer — that’s the equivalent of giving up in UI design), I’m curious to see what the oft-rumored multitasking support in iPhone OS 4 will do for things.

And besides, without things like Windowshade, what UI elements could be licensed (or stolen) and incorporated into the OS? Ahem.

I’d never bet against people who tinker, and neither should Apple.

I haven’t even covered issues from the hardware side, in spite of having grown up taking apart electronics and in awe of the Heathkit stereo my dad built. But it’s the sort of thing that disturbs our friends at MAKE, and others have written about similar issues. Peter Kirn has more on just how bad the device is in terms of openness. One of the most egregious hardware problems is that the device’s connection to the outside world is a proprietary port, access to which has to be licensed from Apple. This isn’t just a departure from the Apple ][ days of having actual digital and analog ports on the back (it was like an Arduino! but slower…); it’s not even something more standard like USB.

But why would you artificially keep this audience away? To make a couple extra percent on licensing fees? How sustainable is that? Sure it’s a tiny fraction of users, but it’s some of the most important — the people who are going to do new and interesting things with your platform, and take it in new directions. Just like the engineers who sneaked networking into the original Macintosh, or who built entire industries around extending the Apple ][ to do new things. Aside from the schools, these were the people who kept that hardware relevant long enough for Apple to screw up the Lisa and Mac projects for a few years while they got their bearings.

Enough…

I am not a futurist, but at the end of it all, I’m pretty disappointed by where things seem to be heading. I spend a lot of effort on making things, and on trying to get others to make things, and having someone in charge of what I make, and how I distribute it, is incredibly grating. And the fact that they’re having this much success with it is saddening.

It may even just work.

Friday, March 12, 2010 | cs, mobile, notafuturist, software  

1995? Bah!

Newsweek has posted a 1995 article by Clifford Stoll slamming “The Internet.”

Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Intenet. Uh, sure.

Well, maybe Negroponte was wrong that we’d be buying newspapers. Ahem.

But the thing I find most amazing about the article is that all the examples he cites as futuristic B.S. are in fact the successful parts. Take shopping:

Then there’s cyberbusiness. We’re promised instant catalog shopping—just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month? Even if there were a trustworthy way to send money over the Internet—which there isn’t—the network is missing a most essential ingredient of capitalism: salespeople.

He could have at least picked some of the dumber ideas about “the future” that were being pushed at the time, but instead he’s a shockingly accurate anti-futurist.

I’ll happily point out that in 1995 I couldn’t imagine buying clothes online either. In fact I remember having a conversation with Frank Ludolph (a former Xerox PARC researcher who was part of the Lisa team and also worked on the Mac Finder, and who was at Sun at the time) about exactly that. He said you had to be able to touch the clothes and get the color and texture — I concurred. Then again, Frank was also cheerfully embarrassed to admit (that same summer) that he was one of the people (at PARC or Apple, I don’t recall) who argued against the idea of overlapping windows in user interfaces because they would be too confusing for users. Instead he (and many others in that camp) advocated that the screen be divided into a grid of panels.

It’s tough to be a futurist, but Stoll seems to have the market cornered on being an exactly wrong, and very entertaining, anti-futurist.

Monday, March 1, 2010 | notafuturist  

JavaScript: The Good Parts

Watched Douglas Crockford’s “JavaScript: The Good Parts” talk, based on his book of the same name. I like Crockford’s work on JSON—or rather, the idea of simple file formats that need simple APIs to work with them. More important, with the continued evolution of processing.js, I’m really optimistic about where things are headed with JavaScript. (You might say I’m feeling a bit hopey changey about it.) I’ve had Crockford’s book in my reading pile for a while and finally got around to watching the talk last week.

I was at Netscape (or maybe at Sun?) when they renamed their “LiveScript” language to “JavaScript” (because Java was the it-language at the time), and I’d avoided it for a long time. His talk points out a series of things to avoid in the JavaScript syntax; in fact I think I enjoyed the explanation of the “Bad Parts” a bit more. By clearing out a few things, the whole starts making more sense. But it’s an interesting discussion for people scratching their heads about this incredibly pervasive language found in web browsers, one that’s rapidly becoming more exciting as support for Canvas and WebGL evolves.
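
To give a flavor of what he flags (my own minimal example, not code from the talk or the book): two of the frequently cited “bad parts” are implied global variables and the type-coercing == operator, and both have simple alternatives.

```javascript
// Implied globals: forgetting "var" quietly creates a global variable.
function sumBad(values) {
  total = 0;                       // oops: "total" leaks into the global scope
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// The "good parts" version: declare the variable, keeping it local.
function sumGood(values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// And prefer === to ==, so no type coercion happens behind your back.
console.log(0 == '');    // true, thanks to coercion
console.log(0 === '');   // false, which is what you probably meant
```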

Tuesday, February 23, 2010 | cs, languages, processing, speaky  

Processing 0176 (pre-release)

I’ve just posted revision 0176 of Processing, a pre-release of what will become version 1.1 or maybe 1.5, depending on how long we bake this one before releasing the final. A list of changes can be found here.

You can download the release at android.processing.org, which (as you might guess) is the eventual home of the Android version of Processing. The Android support is very incomplete, as you can see from the warnings on the page.

But ignore for a moment that it says “Android”; the download is hosted there because, at the moment, most of my energy is focused on the Android extensions. While the build also includes the incomplete Android tools (just pretend they aren’t there, unless you’re willing to read all the caveats on that page), there are many bug fixes for the regular Java version of Processing in the download too. It’s been a couple months since I’ve done a proper release, so there’s a backlog of fixed bugs and things I’ve been adding.

I’m posting the pre-release because so many things have changed, and I don’t want to do a 1.1 release, followed by an immediate 1.1.1. So please test! Then again, it’s taken me so long to explain the situation that I should have just posted it as 1.1.

And by the time you read this, it’ll probably be release 0177, or 0178, or…

Saturday, February 20, 2010 | processing  

Taking the “vs.” out of Man & Machine

Fascinating editorial from chess champion Garry Kasparov, about the relationship between humans and machines:

The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:

By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.

It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.

He continues to describe playing games with humans aided by computers, and how it made the game even more dependent upon creativity:

Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point.

Or some of the other effects:

Having a computer partner also meant never having to worry about making a tactical blunder. The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions. Despite access to the “best of both worlds,” my games with Topalov were far from perfect. We were playing on the clock and had little time to consult with our silicon assistants. Still, the results were notable. A month earlier I had defeated the Bulgarian in a match of “regular” rapid chess 4–0. Our advanced chess match ended in a 3–3 draw. My advantage in calculating tactics had been nullified by the machine.

That final result reinforces descriptions I’d heard in the past of Kasparov’s play as machine-like (in a sense, this is verification or even quantification of that idea). The article also includes some interesting comments on numerical scale:

The number of legal chess positions is 10^40, the number of different possible games, 10^120. Authors have attempted various ways to convey this immensity, usually based on one of the few fields to regularly employ such exponents, astronomy. In his book Chess Metaphors, Diego Rasskin-Gutman points out that a player looking eight moves ahead is already presented with as many possible games as there are stars in the galaxy. Another staple, a variation of which is also used by Rasskin-Gutman, is to say there are more possible chess games than the number of atoms in the universe. All of these comparisons impress upon the casual observer why brute-force computer calculation can’t solve this ancient board game. They are also handy, and I am not above doing this myself, for impressing people with how complicated chess is, if only in a largely irrelevant mathematical way.
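
Out of curiosity, a quick back-of-envelope check of the eight-move claim (my own arithmetic, not from the article): assuming the commonly cited average of roughly 35 legal moves per position, and reading “eight moves” as eight half-moves, the count lands within an order of magnitude of the few hundred billion stars usually estimated for the Milky Way.

```javascript
var branching = 35;                               // rough average of legal moves per position
var gamesAfterEight = Math.pow(branching, 8);     // about 2.25e12 possible sequences
var starsInGalaxy = 3e11;                         // Milky Way estimates run from ~1e11 to ~4e11

console.log(gamesAfterEight.toExponential(2));              // "2.25e+12"
console.log((gamesAfterEight / starsInGalaxy).toFixed(1));  // within an order of magnitude
```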

And one last statement:

Our best minds have gone into financial engineering instead of real engineering, with catastrophic results for both sectors.

In the article, Kasparov mentions Moravec’s Paradox, described by Wikipedia as:

“contrary to traditional assumptions, the uniquely human faculty of reason (conscious, intelligent, rational thought) requires very little computation, but that the unconscious sensorimotor skills and instincts that we share with the animals require enormous computational resources”

And another interesting notion:

Marvin Minsky emphasizes that the most difficult human skills to reverse engineer are those that are unconscious. “In general, we’re least aware of what our minds do best,” he writes, and adds “we’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.”

Saturday, February 20, 2010 | human, scale, simulation  

Dick Brass

An interesting op-ed by Dick Brass, a former Vice President at Microsoft, on how the company’s internal structure can get in the way of innovation, citing specific examples. The first relates to ClearType and the difficulties of getting it integrated into other products:

Although we built it to help sell e-books, it gave Microsoft a huge potential advantage for every device with a screen. But it also annoyed other Microsoft groups that felt threatened by our success.

Engineers in the Windows group falsely claimed it made the display go haywire when certain colors were used. The head of Office products said it was fuzzy and gave him headaches. The vice president for pocket devices was blunter: he’d support ClearType and use it, but only if I transferred the program and the programmers to his control. As a result, even though it received much public praise, internal promotion and patents, a decade passed before a fully operational version of ClearType finally made it into Windows.

Or another case, from the attempts to build the Tablet PC, in stark contrast to Apple’s (obvious and necessary) redesign of iWork for its upcoming iPad:

Another example: When we were building the tablet PC in 2001, the vice president in charge of Office at the time decided he didn’t like the concept. The tablet required a stylus, and he much preferred keyboards to pens and thought our efforts doomed. To guarantee they were, he refused to modify the popular Office applications to work properly with the tablet. So if you wanted to enter a number into a spreadsheet or correct a word in an e-mail message, you had to write it in a special pop-up box, which then transferred the information to Office. Annoying, clumsy and slow.

Having spent time in engineering meetings where similar arguments were made, it’s interesting to see how that perspective translates into actual outcomes. ClearType has seemingly crawled its way to a modest success (though arguably it was invented much earlier, on Apple ][ displays), while Microsoft’s Tablet efforts remain a failure. But neither represents the common-sense approach that has had such an influence on Apple’s success.

Update: A shockingly bad official response has been posted to Microsoft’s corporate blog. While I took the original article to be one person’s perspective, the lame retort (inline smiley face and all) does more to reinforce Brass’ argument.

Thursday, February 4, 2010 | cs, failure, software  

Design for Haiti

John Maeda put us in touch with Aaron Perry-Zucker, who writes:

I created Design for Obama and saw what a fully engaged, passionate, creative community can do. On that occasion, we were eager to lend our creative talents to a movement calling for change and inspire others to do the same.

Today we face a much graver task: In the wake of the unimaginable suffering that has befallen the island of Haiti, it is our job as artists and designers to use our talents to call for advocacy and understanding. Thanks to Design for Obama artist, James Nesbitt, we are now operating from designforhaiti.com.

Consider this a creative call to action to design:

Both are necessary; this is what artists and designers do best. Let us come together and lead the way to relief.

— Aaron Perry-Zucker

Thursday, January 21, 2010 | opportunities  

Dr. Baumol Talks Health Care Cost

Dr. Baumol, in red.

Continuing my recent fascination with (and attention to) health care: an interesting post on the New York Times site about the economics of increasing health costs, based on the ideas of William J. Baumol, who developed the notion of “cost disease”:

Dr. Baumol and a colleague, William G. Bowen, described the cost disease in a 1966 book on the economics of the performing arts. Their point was that some sectors of the economy are burdened by an inexorable rise in labor costs because they tend not to benefit from increased efficiency. As an example, they used a Mozart string quintet composed in 1787: 223 years later, it still requires five musicians and the same amount of time to play.

Essentially, it makes the point that no matter how much reform there is, the cost of care will still outpace inflation. The article (and the theory) focuses on people as the most significant bottleneck, though I haven’t seen anything showing that, in the current setting, the excessive increase in costs over the last ten years (and the reason the U.S. pays twice what other industrialized nations pay, for only average care) is tied to salaries. Tests, insurance costs, overhead, and equipment all seem like things the market can fix, but then again, I’m not much for economics. In the end, the post is light on details (it’s a blog post, not a full article), but it’s interesting food for thought.
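
To make the mechanism concrete, here’s a toy calculation (my own made-up numbers, not from the article): if wages across the economy rise with productivity, a sector whose output needs exactly as much labor as it did decades ago sees its relative price climb steadily, even with no waste anywhere.

```javascript
// Toy illustration of Baumol's cost disease. Two sectors start at the same
// price; one gains 2% productivity per year, the other gains none, but both
// pay wages that grow 2% per year.
var years = 44;                              // roughly 1966 to 2010
var wage = 1.0, widgetPrice = 1.0, concertPrice = 1.0;

for (var y = 1; y <= years; y++) {
  wage *= 1.02;                              // economy-wide wage growth
  widgetPrice = wage / Math.pow(1.02, y);    // productivity offsets wages: price stays flat
  concertPrice = wage;                       // same five musicians, same hour: price tracks wages
}

console.log(widgetPrice.toFixed(2));         // 1.00
console.log(concertPrice.toFixed(2));        // about 2.39, a much higher relative price
```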

(Thanks to Teri for the link)

Monday, January 18, 2010 | healthcare, notaneconomist, Uncategorized  

New for 2010

Back in December, I made the decision to leave Seed and strike out on my own. As of January 1st (two weeks ago), I’m setting up shop in Cambridge. (That’s the fake Cambridge for you UK readers. Or, Cambridge like “MIT and Harvard” not “University Of”).

The federal government knows this new venture under the charmingly creative moniker of BEN FRY LLC, but with any luck, a proper name will be found soon so that I don’t have to introduce myself as Ben Fry, founder of Ben Fry LLC. (Which is even worse than having a site with your own name as the URL. I have Tom White—who originally registered the site as a joke—to thank for that.)

I’ll soon be hiring designers, developers, data people, and peculiar hybrids thereof. If you do the sort of work that you see on this site, please get in touch (send a message to mail at benfry.com). In particular I’d like to find people local to Cambridge/Boston, but because some of this will be project-oriented freelance work, some of it can be done at a distance.

Stay tuned, more to come.

(Update 1/21/2010 – Thanks for the responses. I’m having trouble keeping on top of my inbox so my apologies in advance if you don’t hear back from me promptly.)

Saturday, January 16, 2010 | opportunities, seed, site  
Book

Visualizing Data Book Cover

Visualizing Data is my 2007 book about computational information design. It covers the path from raw data to how we understand it, detailing how to begin with a set of numbers and produce images or software that lets you view and interact with information. When first published, it was the only book for people who wanted to learn how to actually build a data visualization in code.

The text was published by O’Reilly in December 2007 and can be found at Amazon and elsewhere. Amazon also has an edition for the Kindle, for people who aren’t into the dead tree thing. (Proceeds from Amazon links found on this page are used to pay my web hosting bill.)

Examples for the book can be found here.

The book covers ideas found in my Ph.D. dissertation, which is the basis for Chapter 1. The next chapter is an extremely brief introduction to Processing, which is used for the examples. Next (Chapter 3) is a simple mapping project to place data points on a map of the United States. Of course, the idea is not that lots of people want to visualize data for each of the 50 states. Instead, it’s a jumping-off point for learning how to lay out data spatially.
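
The book’s examples are written in Processing, but to give a sense of what “laying out data spatially” means in code, here is a minimal JavaScript sketch (not from the book; the bounds and data points are made up for illustration) that places a few values on screen by interpolating their longitude and latitude into pixel coordinates.

```javascript
// Linearly interpolate longitude/latitude into pixel coordinates,
// using a rough bounding box for the continental United States.
var bounds = { west: -125, east: -66, north: 50, south: 24 };
var width = 640, height = 400;

function project(lon, lat) {
  var x = (lon - bounds.west) / (bounds.east - bounds.west) * width;
  var y = (bounds.north - lat) / (bounds.north - bounds.south) * height;
  return { x: x, y: y };
}

// A few hypothetical data points: [longitude, latitude, value].
var points = [
  [-71.06, 42.36, 12],    // Boston
  [-87.63, 41.88, 30],    // Chicago
  [-122.42, 37.77, 21]    // San Francisco
];

points.forEach(function (p) {
  var pos = project(p[0], p[1]);
  // A real sketch would draw a circle sized by the value onto a canvas;
  // here the screen coordinates are simply printed.
  console.log(pos.x.toFixed(1) + ', ' + pos.y.toFixed(1) + '  value ' + p[2]);
});
```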

The chapters that follow cover six more projects, such as salary vs. performance (Chapter 5), zipdecode (Chapter 6), followed by more advanced topics dealing with trees, treemaps, hierarchies, and recursion (Chapter 7), plus graphs and networks (Chapter 8).

This site is used for follow-up code and writing about related topics.