My friends down the street at Small Design Firm (started by Media Lab alum and namesake David Small) are looking for a programmer-designer type:
Small Design Firm is an interactive design studio that specializes in museum exhibits, information design, dynamic typography and interactive art. We write custom graphics software and build unique physical installations and media environments. Currently our clients include the Metropolitan Museum of Art, United States Holocaust Memorial Museum and Maya Lin.
We are looking to hire an individual with computer programming and design/art/architecture skills. Applicants should have a broad skill set that definitely includes C++ programming experience and an interest in the arts. This position is open to individuals with a wide variety of experiences and specialities. Our employees have backgrounds in computer graphics, typography, electrical engineering, architecture, music, and physics.
Responsibilities will be equally varied. You will be programming, designing, writing proposals, working directly with clients, managing content and production, and fabricating prototypes and installations.
Small Design Firm is an energetic and exciting place to work. We are a close-knit community, so we are looking for an outgoing team member who is willing to learn new skills and bring new ideas to the group.
Salary is commensurate with experience and skill set. Benefits include health insurance, SIMPLE IRA, and paid vacation.
Contact john (at) smalldesignfirm.com if you’re interested.
Last week the National Institutes of Health (NIH) modified their policy for posting and accessing genome-wide association studies (GWAS) data contained in NIH databases. They have removed public access to aggregate genotype GWAS data in response to the publication of new statistical techniques for analyzing dense genomic information that make it possible to infer the group assignment (case vs. control) of an individual DNA sample under certain circumstances. The Wellcome Trust Case Control Consortium in the UK and the Broad Institute of MIT and Harvard in Boston have also removed aggregate data from public availability. Consequently, UCSC has removed the “NIMH Bipolar” and “Wellcome Trust Case Control Consortium” data sets from our Genome Browser site.
The ingredients for a genome-wide association study are a few hundred people, and a list of which genetic letter (A, C, G, or T) is found at a few hundred specific locations in the DNA of each of those people. That data is then correlated with whether each individual has a particular disease, and using the correlation, it's sometimes possible to localize which part of the genome is responsible for the disease.
Of course, the diseases might be of a sensitive nature (e.g. bipolar disorder), so when such data is made publicly available, it’s done in a manner that protects the privacy of the individuals in the data set. What this message means is that a bioinformatics method has been developed that undermines those privacy protections. An amazing bit of statistics!
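As a toy sketch of that correlation step (all numbers here are invented, and real studies scan hundreds of thousands of locations with corrections for multiple testing), a single location might be tested for association with a disease using a chi-square statistic on allele counts:

```python
# Hypothetical example: counts of two genetic letters ("alleles") at a single
# location, tallied separately for people with a disease (cases) and without
# (controls). All numbers are made up for illustration.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]],
    using the standard shortcut formula for a 2x2 table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

cases_A, cases_G = 140, 60        # allele counts among cases (invented)
controls_A, controls_G = 95, 105  # allele counts among controls (invented)

stat = chi_square_2x2(cases_A, cases_G, controls_A, controls_G)
print(round(stat, 2))  # → 20.89; large values suggest an association
```

A value this far above the ~3.84 cutoff for one degree of freedom would flag the location as strongly associated with the disease, which is exactly the kind of signal a GWAS hunts for across the genome.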
This made me curious about what led to such a result, so with a little digging, I found this press release, which describes the work:
A team of investigators led by scientists at the Translational Genomics Research Institute (TGen) have found a way to identify possible suspects at crime scenes using only a small amount of DNA, even if it is mixed with hundreds of other genetic fingerprints.
Using genotyping microarrays, the scientists were able to identify an individual’s DNA from within a mix of DNA samples, even if that individual represented less than 0.1 percent of the total mix, or less than one part per thousand. They were able to do this even when the mix of DNA included more than 200 individual DNA samples.
The discovery could help police investigators better identify possible suspects, even when dozens of people over time have been at a crime scene. It also could help reassess previous crime scene evidence, and it could have other uses in various genetic studies and in statistical analysis.
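The statistical trick, as I understand it from the coverage (the simulation below is my own toy sketch, not the researchers' actual method or code), is that if a person's DNA is part of a mixture, the mixture's measured allele frequencies are pulled very slightly toward that person's genotype, and summing those tiny differences across thousands of locations reveals membership:

```python
# Toy simulation (my own sketch): compare each person's genotype to a
# reference population's allele frequencies and to the frequencies measured
# in a DNA mixture. A member of the mixture sits slightly closer to the
# mixture, and the effect accumulates over many locations.

import random

random.seed(1)

NUM_SNPS = 20_000  # locations tested; real arrays use hundreds of thousands
POOL_SIZE = 100    # people whose DNA is pooled into the mixture

# Hypothetical reference allele frequency at each location.
ref_freq = [random.uniform(0.1, 0.9) for _ in range(NUM_SNPS)]

def random_person(freqs):
    """Genotype per location: the fraction of a person's two gene copies
    carrying the allele, so each value is 0, 0.5, or 1."""
    return [(int(random.random() < p) + int(random.random() < p)) / 2
            for p in freqs]

# Pool 100 people and record the mixture's average frequency per location.
pool = [random_person(ref_freq) for _ in range(POOL_SIZE)]
mix_freq = [sum(person[i] for person in pool) / POOL_SIZE
            for i in range(NUM_SNPS)]

def membership_score(person):
    """Sum of |genotype - reference| - |genotype - mixture| across locations.
    Scores well above zero suggest the person's DNA is in the mixture."""
    return sum(abs(y - r) - abs(y - m)
               for y, r, m in zip(person, ref_freq, mix_freq))

member_score = membership_score(pool[0])                    # in the mixture
outsider_score = membership_score(random_person(ref_freq))  # not in it
print(member_score > outsider_score)  # the member's score stands out
```

Each individual location reveals almost nothing; the power comes from aggregating a huge number of them, which is why publishing aggregate frequencies turned out to be far less anonymous than assumed.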
Links to much more coverage can be found here, with major journals (Nature) and mainstream media outlets (LA Times, Financial Times) weighing in on the research. (It's always funny to see how news outlets respond to this sort of thing—the Financial Times talks about the positive side, while the LA Times focuses exclusively on the negative.) A discussion of the implications of the study can also be found on the PLoS site, with further background from the study's primary author.
Science presents such fascinating contradictions. A potentially helpful advance that undermines another area of research. The breakthrough that opens a Pandora’s Box. It’s probably rare to see such a direct contradiction (that’s not heavily politicized like, say, stem cell research), but the social and societal impact is undoubtedly one of the things I love most about genetics in particular.
Finally, the infographic I've been waiting for: the Washington Post compares the tax proposals of United States presidential candidates John McCain and Barack Obama:
Lots of words have been spilled over the complexities of tax policy, whether in stump speeches, advertisements, or policy papers. Yet these are usually distilled for voters in lengthy articles that throw still more words at the problem. Compare even a well-written article like this one at Business Week with the graphic above from the Washington Post. Which of the two will you be able to remember tomorrow?
I also appreciate that the graphic very clearly represents the general tax policies of Republicans vs. Democrats, without showing bias toward either. The only thing missing is a sense of how large each category is – how many people fall in the “over $2.87 million” category versus the “$66,000 to $112,000” category – which would help convey a better sense of the “middle class” term that candidates like to throw around.
There is still greater complexity to the debate than what’s shown in this image (the Business Week article describes treasury shortfalls based on the McCain proposal, for instance), but without the initial explanation provided by that graphic, will voters even bother with those details?
Given some number of talented people, success is not particularly surprising. But sustaining that success in a creative organization, the way Pixar has over the last fifteen years, is truly exceptional. Ed Catmull, cofounder of Pixar (and a computer graphics pioneer), writes about their success for the Harvard Business Review:
Unlike most other studios, we have never bought scripts or movie ideas from the outside. All of our stories, worlds, and characters were created internally by our community of artists. And in making these films, we have continued to push the technological boundaries of computer animation, securing dozens of patents in the process.
People tend to think of creativity as a mysterious solo act, and they typically reduce products to a single idea: This is a movie about toys, or dinosaurs, or love, they’ll say. However, in filmmaking and many other kinds of complex product development, creativity involves a large number of people from different disciplines working effectively together to solve a great many problems. The initial idea for the movie—what people in the movie business call “the high concept”—is merely one step in a long, arduous process that takes four to five years.
A movie contains literally tens of thousands of ideas.
On taking risks:
…we as executives have to resist our natural tendency to avoid or minimize risks, which, of course, is much easier said than done. In the movie business and plenty of others, this instinct leads executives to choose to copy successes rather than try to create something brand-new. That’s why you see so many movies that are so much alike. It also explains why a lot of films aren’t very good. If you want to be original, you have to accept the uncertainty, even when it’s uncomfortable, and have the capability to recover when your organization takes a big risk and fails. What’s the key to being able to recover? Talented people!
Reminding us that we learn more from failure, the more interesting part of the article talks about how Pixar responded to early failures in Toy Story 2:
Toy Story 2 was great and became a critical and commercial success—and it was the defining moment for Pixar. It taught us an important lesson about the primacy of people over ideas: If you give a good idea to a mediocre team, they will screw it up; if you give a mediocre idea to a great team, they will either fix it or throw it away and come up with something that works.
Toy Story 2 also taught us another important lesson: There has to be one quality bar for every film we produce. Everyone working at the studio at the time made tremendous personal sacrifices to fix Toy Story 2. We shut down all the other productions. We asked our crew to work inhumane hours, and lots of people suffered repetitive stress injuries. But by rejecting mediocrity at great pain and personal sacrifice, we made a loud statement as a community that it was unacceptable to produce some good films and some mediocre films. As a result of Toy Story 2, it became deeply ingrained in our culture that everything we touch needs to be excellent.
On mixing art and technology:
[Walt Disney] believed that when continual change, or reinvention, is the norm in an organization and technology and art are together, magical things happen. A lot of people look back at Disney’s early days and say, “Look at the artists!” They don’t pay attention to his technological innovations. But he did the first sound in animation, the first color, the first compositing of animation with live action, and the first applications of xerography in animation production. He was always excited by science and technology.
At Pixar, we believe in this swirling interplay between art and technology and constantly try to use better technology at every stage of production. John coined a saying that captures this dynamic: “Technology inspires art, and art challenges the technology.”
I saw Catmull speak to the Computer Science department a month or two before I graduated from Carnegie Mellon. Toy Story had been released two years earlier, and 20 or 30 of us were jammed into a room listening to this computer graphics legend speaking about…storytelling. The importance of narrative. How the movies Pixar was creating had less to do with the groundbreaking computer graphics (the reason most of us were in the room) than with a good story. This is less shocking nowadays, especially if you’ve ever seen a lecture by someone from Pixar, but the scene left an incredible impression on me. It was a wonderful message to the programmers in attendance about the importance of placing purpose before technology, without belittling the importance of either.
(While digging for an image to illustrate this post, I also found this review of The Pixar Touch: The Making of a Company, a book that seems to cover similar territory as the HBR article, but from the perspective of an outside author. The image is stolen from Ricky Grove’s review.)
Reminds me of taking all the pages of my Ph.D. dissertation (a hundred or so) and organizing them on the floor of a friend’s living room. (Luckily it was a large living room.) It was extremely helpful and productive, but it frightened my friend, who returned home to a sea of paper and a guy who had been indoors all day, sitting in the middle of it with a slightly wild look in his eyes.
(Thanks to Jason Leigh, who mentioned the photos during his lecture at last week’s iCore summit in Banff.)
Don LaFontaine, voice artist for some 5,000 movies and 350,000 advertisements, passed away Monday. He’s the man who came up with the “In A World…” opening that begins most film trailers, as well as the baritone voice style that goes with it. The Washington Post has an obituary.
In the early 1960s, he landed a job in New York with National Recording Studios, where he worked alongside radio producer Floyd L. Peterson, who was perfecting radio spots for movies. Until then, movie studios primarily relied on print advertising or studio-made theatrical trailers. The two men became business partners and, together, perfected the familiar format.
Mr. LaFontaine, who was editing, writing and producing in the early days of the partnership, became a voice himself by accident. In 1964, when an announcer failed to show up for a job, he recorded himself reading copy and sent it to the studio with a message: “This is what it’ll sound like when we get a ‘real’ announcer.”
Trailer for The Elephant Man, proclaimed to be his favorite:
And a short interview/documentary:
Don’s impact is unmistakable, and it’s striking to think of how his approach changed movie advertising. May he rest in peace.
But in fact, nearly two centuries after the publication of his famous folios, it is Audubon’s technique, and not the sharp eye of the modern camera, that prevails in a wide variety of reference books. For bird-watchers, the best guides, the most coveted guides – like those by David Allen Sibley and Roger Tory Peterson – are still filled with hand-painted images. The same is true for similar volumes on fish, trees, and even the human body. Ask any first-year medical student what they consult during dissections, and they will name Dr. Frank H. Netter’s meticulously drafted “Atlas of Human Anatomy.” Or ask architects and carpenters to see their structures, and they will often show you chalk and pencil “renderings,” even after the things have been built and professionally photographed.
This nicely reinforces the case for drawing, and why it’s so powerful. The article later gets to the meat of the issue, which is the same reason that drawing is a topic on a site about data visualization.
Besides seamlessly imposing a hierarchy of information, the handmade image is also free to present its subject from the most efficient viewpoint. Audubon sets a high standard in this regard; he is often at pains to depict the beak in its most revealing profile, the crucial feathers at an identifiable angle, the front leg extended just so. When the nighthawk and the whip-poor-will are pictured in full flight, their legs tucked away, he draws the feet at the side of the page, so we’re not left guessing. If Audubon draws a bird in profile, as he does with the pitch-black rook and the grayer hooded crow, we’re not missing any details a three-quarters view would have shown.
And finally, a reminder:
Confronted with unprecedented quantities of data, we are constantly reminded that quality is what really matters. At a certain point, the quality and even usefulness of information starts being defined not by the precision and voracity of technology, but by the accuracy and circumspection of art. Seen in this context, Audubon shows us that painting is not just an old fashioned medium: it is a discipline that can serve as a very useful filter, collecting, editing, and carefully synthesizing information into a single efficient and evocative image – giving us the information that we really want, information we can use and, as is the case with Audubon, even cherish.
Consider this your constant reminder, because I think it’s actually quite rare that quality is acknowledged. I regularly attend lectures by speakers who boast about how much data they’ve collected and the complexity of their software and hardware, but it’s one in ten thousand who even mention the art of removing or ignoring data in search of better quality.
Looks like the Early Drawings book mentioned in the article will be available at the end of September.
BusinessWeek has an excerpt of Numerati, a book about the fabled monks of data mining (Publishers Weekly calls them “entrepreneurial mathematicians”) who are sifting through the personal data we create every day.
Picture an IBM manager who gets an assignment to send a team of five to set up a call center in Manila. She sits down at the computer and fills out a form. It’s almost like booking a vacation online. She puts in the dates and clicks on menus to describe the job and the skills needed. Perhaps she stipulates the ideal budget range. The results come back, recommending a particular team. All the skills are represented. Maybe three of the five people have a history of working together smoothly. They all have passports and live near airports with direct flights to Manila. One of them even speaks Tagalog.
Everything looks fine, except for one line that’s highlighted in red. The budget. It’s $40,000 over! The manager sees that the computer architect on the team is a veritable luminary, a guy who gets written up in the trade press. Sure, he’s a 98.7% fit for the job, but he costs $1,000 an hour. It’s as if she shopped for a weekend getaway in Paris and wound up with a penthouse suite at the Ritz.
Hmmm. The manager asks the system for a cheaper architect. New options come back. One is a new 29-year-old consultant based in India who costs only $85 per hour. That would certainly patch the hole in the budget. Unfortunately, he’s only a 69% fit for the job. Still, he can handle it, according to the computer, if he gets two weeks of training. Can the job be delayed?
This is management in a world run by Numerati.
I’m highly skeptical of management (a fundamentally human activity) being distilled to numbers in this manner. Unless, of course, the managers are that poor at doing their job. And further, what’s the point of the manager if they’re spending most of their time filling out the vacation form-style work order? (Filling out tedious year-end reviews, no doubt.) Perhaps it should be an indication that the company is simply too large:
As IBM sees it, the company has little choice. The workforce is too big, the world too vast and complicated for managers to get a grip on their workers the old-fashioned way—by talking to people who know people who know people.
Then we descend (ascend?) into the rah-rah of today’s global economy:
Word of mouth is too foggy and slow for the global economy. Personal connections are too constricted. Managers need the zip of automation to unearth a consultant in New Delhi, just the way a generation ago they located a shipment of condensers in Chicago. For this to work, the consultant—just like the condensers—must be represented as a series of numbers.
I say rah-rah because how else can you put refrigeration equipment parts in the same sentence as a living, breathing person with a mind, free will, and a life?
And while I don’t think I agree with this particular thesis, the book as a whole looks like an interesting survey of efforts in this area. Time to finish my backlog of summer reading so I can order more books…
Visualizing Data is my book about computational information design. It covers the path from raw data to how we understand it, detailing how to begin with a set of numbers and produce images or software that lets you view and interact with information. Unlike nearly all books in this field, it is a hands-on guide intended for people who want to learn how to actually build a data visualization.
The text was published by O’Reilly in December 2007 and can be found at Amazon and elsewhere. Amazon also has an edition for the Kindle, for people who aren’t into the dead tree thing. (Proceeds from Amazon links found on this page are used to pay my web hosting bill.)
The book covers ideas found in my Ph.D. dissertation, which is the basis for Chapter 1. The next chapter is an extremely brief introduction to Processing, which is used for the examples. Chapter 3 is a simple mapping project that places data points on a map of the United States. Of course, the idea is not that lots of people want to visualize data for each of the 50 states; rather, it’s a jumping-off point for learning how to lay out data spatially.
The chapters that follow cover six more projects, such as salary vs. performance (Chapter 5) and zipdecode (Chapter 6), followed by more advanced topics: trees, treemaps, hierarchies, and recursion (Chapter 7), plus graphs and networks (Chapter 8).
This site is used for follow-up code and writing about related topics.