The roof of the Metrodome springs a leak following heavy snow in Minnesota:
I’ve clearly been looking at too many particle and fluid dynamics simulations, because it looks fake to me — more like a simulation created by the structural engineers of what would happen if the roof were to collapse — rather than thousands of pounds of honest-to-goodness midwestern snow pummeling the turf seemingly in slow motion. Beautiful.
And another version from a local FOX affiliate in Minnesota:
Number of Processing users, every four weeks, since 2005:
Long version: this is a tally of the number of unique users who run the Processing environment every four weeks, as measured by the number of machines checking for updates.
In spite of the frequently proclaimed “death of Java” or “death of Java on the desktop,” we’re continuing to grow. This isn’t to say that Java on the desktop is undead, but this frustrating contradiction presents a considerable challenge for us… I’ll write more about that soon.
There’s a considerable (even comical) dip each January, when people decide that the holidays and drinking with their family are more fun than coding (or maybe that’s only my household). Things also tail off during the summer into August. These two trends are amplified by the number of academic users; however, other data I’ve seen (web traffic, etc.) suggests that the rest of the world actually operates on something like the academic calendar as well.
About the data:
This is a very conservative estimate of the number of Processing users out there. Our software is free — we don’t have a lot to gain by inflating the numbers.
This covers only unique users — we don’t double count the same person in each 4-week period. Otherwise our numbers would be much higher.
This is not downloads, which are also significantly higher.
This is every four weeks, not every month. Unless there are 13 months in a year. Wait, how many months are in a year?
This only covers people who are using the actual Processing Development Environment — no Eclipse users, etc.
Use of processing.js or spinoff projects is not included.
This doesn’t include anyone who has disabled checking for updates.
This doesn’t include anyone not connected to the net.
The unique ID is stored in the preferences.txt file, so if several people share a single login on a machine, they’re counted as just one user. Conversely, if you have multiple machines, you’ll be counted more than once.
Showing the data by day, week, or year reveals the same overall trend.
This is a pretty lame visualization of the numbers, and I’m not even showing other interesting tidbits like what OS, version, and so on are in use. Maybe we can release the data if we can figure out an appropriate way to do so.
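For the curious, the tallying itself is simple. Below is a minimal sketch of the idea in Java (not our actual server code, and the record format is hypothetical): bucket each (id, timestamp) update check into a four-week window, and keep a set of IDs per window so nobody gets double counted.

    import java.util.*;

    // Toy tally of unique user IDs per four-week window. Each record is
    // assumed to be { id, millis } pulled from the update-check logs.
    static Map<Long, Set<String>> tallyWindows(List<String[]> records, long epoch) {
      final long WINDOW = 28L * 24 * 60 * 60 * 1000;  // four weeks in millis
      Map<Long, Set<String>> windows = new TreeMap<Long, Set<String>>();
      for (String[] rec : records) {
        String id = rec[0];                     // unique ID from preferences.txt
        long when = Long.parseLong(rec[1]);     // when the update check arrived
        long bucket = (when - epoch) / WINDOW;  // which four-week window it lands in
        Set<String> ids = windows.get(bucket);
        if (ids == null) {
          ids = new HashSet<String>();
          windows.put(bucket, ids);
        }
        ids.add(id);  // a set, so the same user only counts once per window
      }
      return windows;  // windows.get(b).size() is the unique-user count for window b
    }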
Exciting news! The short story is that there’s a new Processing Plug-in for Eclipse, and you can learn about it here.
The long story is that Chris Lonnen contacted me in the spring about applying for the Google Summer of Code (SoC) program, which I promptly missed the deadline for. But we eventually managed to put him to work anyway, via Fathom (our own SoC army of one, with Chris working from afar in western New York) with the task of working on a new editor that we can use to replace the current Processing Development Environment (the PDE).
After some initial work and scoping things out, we settled on the Eclipse RCP as the platform, with the task of first making a plug-in that works in the Eclipse environment (everything in Eclipse is a plug-in), which could then eventually become its own standalone editor to replace the current PDE.
Things are currently incomplete (again, see the Wiki page for more details), but give it a shot, file bugs (tag with Component-Eclipse when filing), and help lend Chris a hand in developing it further. Or if you have questions, be sure to use the forum. Come to think of it, might be time for a new forum section…
I wasn’t going to post this one, but I can’t get it out of my head. In the image below, the squares marked A and B are the same shade of gray.
The image is from Edward H. Adelson at MIT, and you can find my original source here. More details (proof, etc) on Adelson’s site here, which includes this explanation:
The visual system needs to determine the color of objects in the world. In this case the problem is to determine the gray shade of the checks on the floor. Just measuring the light coming from a surface (the luminance) is not enough: a cast shadow will dim a surface, so that a white surface in shadow may be reflecting less light than a black surface in full light. The visual system uses several tricks to determine where the shadows are and how to compensate for them, in order to determine the shade of gray “paint” that belongs to the surface.
The first trick is based on local contrast. In shadow or not, a check that is lighter than its neighboring checks is probably lighter than average, and vice versa. In the figure, the light check in shadow is surrounded by darker checks. Thus, even though the check is physically dark, it is light when compared to its neighbors. The dark checks outside the shadow, conversely, are surrounded by lighter checks, so they look dark by comparison.
A second trick is based on the fact that shadows often have soft edges, while paint boundaries (like the checks) often have sharp edges. The visual system tends to ignore gradual changes in light level, so that it can determine the color of the surfaces without being misled by shadows. In this figure, the shadow looks like a shadow, both because it is fuzzy and because the shadow casting object is visible.
The “paintness” of the checks is aided by the form of the “X-junctions” formed by 4 abutting checks. This type of junction is usually a signal that all the edges should be interpreted as changes in surface color rather than in terms of shadows or lighting.
As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view.
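If your eyes refuse to believe it, a few lines of Processing will settle the matter: load the image and sample a pixel from inside each square. (The file name and coordinates below are placeholders; adjust them for wherever squares A and B fall in your copy of the image.)

    // Sample one pixel inside square A and one inside square B.
    // The file name and coordinates are placeholders.
    PImage img;

    void setup() {
      size(640, 480);
      img = loadImage("checkershadow.png");
      image(img, 0, 0);
      color a = img.get(110, 310);  // somewhere inside square A
      color b = img.get(250, 220);  // somewhere inside square B
      println("A = " + hex(a) + ", B = " + hex(b));  // same value, twice
    }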
(Like the earlier illusion post, this one’s also from my mother-in-law, who should apparently be writing this blog instead of its current—woefully negligent—author.)
Casey and I are in Chicago this weekend for the Processing+Android conference at UIC, organized by Daniel Sauter. In our excitement over the event, we posted revision 0191 last night (we tried to post from the back of Daniel’s old red Volvo, but Sprint’s network took exception). The release includes several Android-related updates, mostly fixes from Andres Colubri to improve how 3D works. Get the download here:
+ Fixed a bug in the camera handling. This was quite an urgent issue, since it affected pretty much everything. It went unnoticed until now because the math error canceled out with the default camera settings. http://forum.processing.org/topic/possible-3d-bug
+ Also finished the implementation of the getImpl() method in PImage, so that it initializes the texture of the new image in A3D mode. This makes the CubicVR example work correctly.
I think they’re saying to me, “You’ve done all this work. Well done… Here’s an award, now do more. Do better.” And it’s very nice, at my age, to be told by someone, that “we expect more from you. And here’s the means to help you achieve that.”
And if you’re not familiar with Carter’s name, you know his work: he created both Verdana and Georgia, at least one of which will be found on nearly any web site (the text you’re reading now is Georgia). Microsoft’s commission of these web fonts helped improve design on the web significantly in the mid-to-late 90s. Carter also developed several other important typefaces like Bell Centennial (back in the 70s), the tiny text found in phone books.
Why, yes! Sure enough, he’s written a version of the game for the Atari 2600.
You can play the game here, and if you don’t drown in the awesome (or die from laughing), you can now purchase prints here. Like the other distellamap prints, it shows how the image and code data coexist and interact inside an Atari 2600 cartridge game:
A detail of what it looks like up close:
(And as with the other prints, proceeds are given to charity.)
A recent Boston Globe editorial covers the issue of multiple, seemingly (if not obviously) contradictory statements that come from complex research, in this case around the oil spill:
Last week, Woods Hole researchers reported a 22-mile-long underwater plume that they mapped out in the Gulf of Mexico in June — a finding indicating that much more oil may lie deep underwater and be degrading so slowly that it might affect the ecosystem for some time. Also last week, University of Georgia researchers estimated up to 80 percent of the spill may still be at large, with University of South Florida researchers finding poisoned plankton between 900 feet and 3,300 feet deep. This differed from the Aug. 4 proclamation by Administrator Jane Lubchenco of the National Oceanic and Atmospheric Administration that three-quarters of the oil was “completely gone’’ or dispersed and the remaining quarter was “degrading rapidly.’’
But then comes the Lawrence Berkeley National Laboratory, which this week said a previously unclassified species of microbes is wolfing down the oil with amazing speed. This means that all the scientists could be right, with massive plumes being decimated these past two months by an unexpected cleanup crew from the deep.
This is often the case for anything remotely complex: the opacity of the research process to the general public, the communication skills of various institutions, the gap between what the public cares about (whose fault is it? how bad is it?) and what interests the researchers, and so on.
It’s a basic issue around communicating complex ideas, and therefore affects visualization too — it’s rare that there’s a single answer.
On a more subjective note, I don’t know if I agree with the editorial’s premise that it’s on the government to sort out the mess for the public. It’s certainly a role of the government, though the sniping at the Obama administration makes the editorial writer sound like someone who is equally likely to bemoan government spending, size, etc. But I could write an equally (perhaps more) compelling editorial making the point that it’s actually the role of newspapers like the Globe to sort out newsworthy issues that concern the public. But sadly, the Globe, or at least the front page of boston.com, has been overly obsessed with more click-ready topics like the Craigslist killer (or any other rapist, murderer, or stomach-turning story involving children du jour) and playing “gotcha” with spending and taxes for universities and public officials. What a bunch of ghouls.
(Thanks to my mother-in-law for the article link.)
Ben Fry LLC now has a proper name, and it is Fathom. Or if you want to be formal about it, “Fathom Information Design”.
And today we launched a new site, fathom.info, for our work. (I’ll still be using benfry.com for my older research projects, Processing updates, software and visualization ramblings, book updates…)
We also have a new project that launched yesterday with GE, this time looking at shifts in age within world populations. A little more info about it is on the Fathom updates page (some might call it a blog). And when we have a chance, we hope to post a bit more of the process behind the piece.
More bug fixes, and one new treat for OS X users. Hopefully we’re about set to call this one 1.2. Please test and report any issues you find.
[ additions ]
+ On Mac OS X, you’re no longer required to have a sketch window open at all times. This will make the application feel more Mac-like: a little more elegant and trendy and smug with superiority.
+ Added a warning to the Linux version to tell users that they should be using the official version of Java from Sun if they’re not (see http://wiki.processing.org/w/Supported_Platforms#Linux). There isn’t a perfect way to detect whether Sun Java is in use, so please let us know how it works or if you have a better idea.
+ Add getDocumentBase() version of createInput() for Internet Explorer. Without this, sketches will crash when trying to find files on a web server that are not in the exported .jar file. This fix is only for IE. Yay IE!
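For context, loadStrings() and friends are built on createInput(), so a sketch as simple as the following (the file name is hypothetical) is the kind that would crash under IE whenever the file lived on the server beside the exported .jar rather than inside it:

    // In a web-exported sketch, "scores.txt" sits on the server next to
    // the exported .jar rather than inside it; that's the case this fixes.
    void setup() {
      String[] lines = loadStrings("scores.txt");  // hypothetical file
      println("loaded " + lines.length + " lines");
    }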
Just posted release 0185 of Processing on the download page. It’s a pre-release for what will eventually become 1.2 or 1.5. Please test and file bugs if you find problems. The list of revisions is below:
PROCESSING 0185 – 20 June 2010
Primarily a bug fix release. The biggest changes are a couple of tweaks for problems caused by Apple’s Update 2 for Java on OS X, so this should make Processing usable on Macs again.
+ Option to change the default naming of sketches via preferences.txt.
First, you can change the prefix, which defaults to sketch_. The suffix is handled using dates: the current default (since 1.0) is a month-and-day pattern, and you can also switch back to the old six-digit style. The relevant entries are sketched below.
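If memory serves (double-check the key names against your own preferences.txt before relying on them), the entries look something like this:

    editor.untitled.prefix=sketch_
    editor.untitled.suffix=MMMdd

The first line sets the prefix; the second is the date-based suffix used since 1.0, which yields names along the lines of sketch_jun20a. Swapping the suffix pattern for yyMMdd should restore the old six-digit style (e.g. sketch_100620a).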
+ Updated bundled JRE/tools to 6u20 for Windows and Linux
+ Several SVG fixes and additions, including some tweaks from PhiLho. These changes will be documented in a future release once the API changes are complete.
Our main leisure activity is, by a long shot, participating in experiences that we know are not real. When we are free to do whatever we want, we retreat to the imagination—to worlds created by others, as with books, movies, video games, and television (over four hours a day for the average American), or to worlds we ourselves create, as when daydreaming and fantasizing. While citizens of other countries might watch less television, studies in England and the rest of Europe find a similar obsession with the unreal.
Another portion talks about emotional response:
The emotions triggered by fiction are very real. When Charles Dickens wrote about the death of Little Nell in the 1840s, people wept—and I’m sure that the death of characters in J.K. Rowling’s Harry Potter series led to similar tears. (After her final book was published, Rowling appeared in interviews and told about the letters she got, not all of them from children, begging her to spare the lives of beloved characters such as Hagrid, Hermione, Ron, and, of course, Harry Potter himself.) A friend of mine told me that he can’t remember hating anyone the way he hated one of the characters in the movie Trainspotting, and there are many people who can’t bear to experience certain fictions because the emotions are too intense. I have my own difficulty with movies in which the suffering of the characters is too real, and many find it difficult to watch comedies that rely too heavily on embarrassment; the vicarious reaction to this is too unpleasant.
Inspired by this post by Kurt Opsahl of the EFF, Matt McKeon of IBM’s Visual Communication Lab created the following visualization depicting the evolution of the default privacy settings on Facebook:
It has a couple of nice visual touches that keep it from looking like YAHSVPOQUFOTI (yet another highly-stylized visualization piece of questionable utility found on the internet). Also cool to see it was built with Processing.js.
Allie Brosh, who appears to be some sort of genius, brings us definitive arguments in the cake versus pie debate. Best to read the entire treatise, but here are a few highlights on how clearly pie defeats cake:
Ability of enjoyment to be sustained over time
Couldn’t agree more: it always seems like a good idea on the first bite, and then I catch myself. What am I doing? I hate cake. Another graphic:
Unequal frosting distribution is a problem
I grew up requesting pie for my birthday (strawberry rhubarb, thank you very much) instead of cake. This resonates. More importantly (for this site), Brosh cites the enormous impact of pie vs. cake for information design and visualization:
Visualizing Data is my book about computational information design. It covers the path from raw data to how we understand it, detailing how to begin with a set of numbers and produce images or software that lets you view and interact with information. Unlike nearly all books in this field, it is a hands-on guide intended for people who want to learn how to actually build a data visualization.
The text was published by O’Reilly in December 2007 and can be found at Amazon and elsewhere. Amazon also has an edition for the Kindle, for people who aren’t into the dead tree thing. (Proceeds from Amazon links found on this page are used to pay my web hosting bill.)
The book covers ideas found in my Ph.D. dissertation, which is the basis for Chapter 1. The next chapter is an extremely brief introduction to Processing, which is used for the examples. Next (Chapter 3) is a simple mapping project to place data points on a map of the United States. Of course, the idea is not that lots of people want to visualize data for each of the 50 states. Instead, it’s a jumping-off point for learning how to lay out data spatially.
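To give a flavor of that exercise, here’s a toy reconstruction of the idea (my own sketch, not the book’s code; the image and data file names are placeholders). Each row of a tab-separated file carries a screen position and a value, and each value becomes a sized dot over the map:

    // Toy version of the mapping idea, not the book's code. map.png and
    // locations.tsv are placeholders; each data row is: x <tab> y <tab> value
    PImage mapImage;

    void setup() {
      size(640, 400);
      mapImage = loadImage("map.png");
      image(mapImage, 0, 0);
      String[] rows = loadStrings("locations.tsv");
      fill(192, 0, 0);
      noStroke();
      for (String row : rows) {
        String[] col = split(row, '\t');
        float x = float(col[0]);
        float y = float(col[1]);
        float value = float(col[2]);
        ellipse(x, y, value, value);  // size each dot by its data value
      }
    }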
The chapters that follow cover six more projects, such as salary vs. performance (Chapter 5) and zipdecode (Chapter 6), and move on to more advanced topics dealing with trees, treemaps, hierarchies, and recursion (Chapter 7), plus graphs and networks (Chapter 8).
This site is used for follow-up code and writing about related topics.