Writing

Food Fight!

As reported here and here, Apple has updated the language in the latest release of their iPhone/iPad developer tools to explicitly disallow development with other tools and languages:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

I’m happy that Apple is being explicit about this sort of thing, rather than taking their previous passive-aggressive stance that left more wiggle room for their apologists. This is a big “screw you” to Adobe in particular, who had been planning to release a Flash-to-iPhone converter with Creative Suite 5. I understand why they’re doing it, but in the broader scheme of what’s at stake, why pick a fight with one of the largest software vendors for the Mac?

Beyond being grounded in total, obsessive control over the platform, the argument seems to be that the only way to make a proper iPhone/iPad experience is to build things with Apple’s tools. Requiring those tools also prevents people from developing for multiple platforms at once, which has two benefits: first, it encourages developers to think within the constraints and affordances of the platform, and second, it forces potential developers to choose which platform they’re going to support. It’s not quite doubling the amount of work that goes into creating an app for both, say, the iPhone and Android, but it’s fairly close. So which will people develop for? The current winner, the one with all the marketing and free hype from the press.

To be clear, developing within the constraints of a platform is incredibly important for getting an application right. But using Apple’s sanctioned tools doesn’t guarantee that, and using a legal document to enforce said tools steps into the ridiculous.

Fundamentally, I think the first argument — that to create a decent application you have to develop a certain way, with one set of tools — is bogus. It’s a lack of trust in your developers and, even more so, a distrust of the market. In the early days of the Macintosh, it was difficult to get companies to rework their DOS (or even Apple II) applications to use the now-familiar menu bars and icons. The Human Interface Guidelines addressed it specifically. And when companies ignored those warnings and released software that was a clear port of a DOS equivalent, people got upset and the software got trashed. Just search for the phrase “not Mac-like” and you’ll get the picture. Point being, people came around on developing for the Mac, and it didn’t require a legal document saying that developers had to use MPW and ResEdit.

The market demanded software that felt like Macintosh applications, and it’s the same for the iPhone and iPad. On the tools side, that freedom of choice also meant the market produced better tools than Apple’s own: instead of the archaic MPW (ironically, itself something of a terminal application), Think Pascal, Lightspeed C, Metrowerks CodeWarrior, and even Resorcerer filled in various gaps at different times, each providing a better platform than (or at least a suitable alternative to) Apple’s tools.

But as with this earlier post, it seems like Apple is being run by someone who is re-fighting the battles of the ’80s and ’90s, yet whose personal penchant for control prevents him from learning from their outcomes. That rhyming sound you hear? It’s history.

Friday, April 9, 2010 | cs, languages, mobile, software  

JavaScript: The Good Parts

Watched Douglas Crockford’s “JavaScript: The Good Parts” talk, based on his book of the same name. I like Crockford’s work on JSON—or rather, the idea of simple file formats that need simple APIs to work with them. More important, with the continued evolution of processing.js, I’m really optimistic about where things are headed with JavaScript. (You might say I’m feeling a bit hopey changey about it.) I’ve had Crockford’s book in my reading pile for a while and finally got around to watching the talk last week.

I was around when Netscape (or was it Sun?) renamed the “LiveScript” language to “JavaScript” (because Java was the it-language at the time), and I’d avoided it for a long time. His talk points out a series of things to avoid in JavaScript’s syntax; in fact, I think I enjoyed the explanation of the “Bad Parts” a bit more. By clearing out a few things, the whole starts making more sense. It’s an interesting discussion for anyone scratching their head about this incredibly pervasive language found in web browsers, one that’s rapidly becoming more exciting as support for Canvas and WebGL evolves.

Tuesday, February 23, 2010 | cs, languages, processing, speaky  

Pirates of Statistics

An article from the New York Times a while back covers R, everyone’s favorite stats package:

R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca partly because data mining has entered a golden age, whether being used to set ad prices, find new drugs more quickly or fine-tune financial models. Companies as diverse as Google, Pfizer, Merck, Bank of America, the InterContinental Hotels Group and Shell use it.

R is also open source, another focus of the article, which includes quoted gems such as this one from commercial competitor SAS:

“I think it addresses a niche market for high-end data analysts that want free, readily available code,” said Anne H. Milley, director of technology product marketing at SAS. She adds, “We have customers who build engines for aircraft. I am happy they are not using freeware when I get on a jet.”

Pure gold: free software is scary software! And freeware? Is she trying to conflate R with free software downloads from CNET?

Truth be told, I don’t think I’d want to be on a plane that used a jet engine designed or built with SAS (or even R, for that matter). Does she know what her product does? (A hint: It’s a statistics package. You might analyze the engine with it, but you don’t use it for design or construction.)

For those less familiar with the project, some examples:

…companies like Google and Pfizer say they use the software for just about anything they can. Google, for example, taps R for help understanding trends in ad pricing and for illuminating patterns in the search data it collects. Pfizer has created customized packages for R to let its scientists manipulate their own data during nonclinical drug studies rather than send the information off to a statistician.

At any rate, many congratulations to Robert Gentleman and Ross Ihaka, the original creators, for their success. It’s a wonderful thing that they’re making enough of a rumpus that a stats package is being covered in a mainstream newspaper.

Arrrr!

Tuesday, January 27, 2009 | languages, mine, software  

Wet and Dry Ingredients; Mixing Bowls and Baking Dishes

Digging through my reading list pile, I began skimming A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics by Michael Mateas and Nick Montfort. I was moving along pretty well until I reached the description of the Chef programming language:

Another language, Chef, illustrates different design decisions for structuring play. Chef facilitates double-coding programs as recipes. Variables are declared in an ingredients list, with amounts indicating the initial value (e.g., 114 g of red salmon). The type of measurement determines whether an ingredient is wet or dry; wet ingredients are output as characters, dry ingredients are output as numbers. Two types of memory are provided, mixing bowls and baking dishes. Mixing bowls hold ingredients which are still being manipulated, while baking dishes hold collections of ingredients to output. What makes Chef particularly interesting is that all operations have a sensible interpretation as a step in a food recipe. Where Shakespeare programs parody Shakespearean plays, and often contain dialog that doesn’t work as dialog in a play (“you are as hard as the sum of yourself and a stone wall”), it is possible to write programs in Chef that might reasonably be carried out as a recipe. Chef recipes do have the unfortunate tendency to produce huge quantities of food, however, particularly because the sous-chef may be asked to produce sub-recipes, such as sauces, in a loop.

Wonderful. (And a nice break for someone who has been fretting about languages and syntax over the last couple weeks.)

Friday, December 12, 2008 | languages  

Is Processing a Language?

This question is covered in the FAQ on Processing.org, but still tends to reappear on the board every few months (most recently here). Someone once described Processing syntax as a dialect of Java, which sounds about right to me. It’s syntax that we’ve added on top of Java to make things a little easier for a particular work domain (roughly, making visual things). There’s also a programming environment that significantly simplifies what’s found in traditional IDEs. Plus there’s a core API set (and a handful of core libraries) that we’ve built to support this type of work. If we did these in isolation, none would really stick out:

  • The language changes are pretty minimal. The big difference is probably how they integrate with the IDE, which is built around the idea of sitting down and quickly writing code (what we call sketching). We don’t require users to first learn class definitions or even method declarations before they can show something on the screen, which helps avoid some of the initial head-scratching that comes from trying to explain “public class” or “void” to beginning programmers (see the sketch that follows this list). For more advanced coders, it helps Java feel a bit more like scripting. I use a lot of Perl for various tasks, and I wanted to replicate the way you can write 5-10 lines of Perl (or Python, or Ruby, or whatever) and get something done. In Java, you often need double that number of lines just to set up your class definitions and a thread.
  • The API set is a Java API. It can be used with traditional Java IDEs (Eclipse, NetBeans, whatever), and a Processing component can be embedded into other applications. But without the rest of it (the syntax and IDE), Processing (API or otherwise) would not be as widely used as it is today. The API grew out of the work Casey and I have done, and our likes and dislikes of various approaches used by libraries we’ve worked with: PostScript, QuickDraw, OpenGL, Java AWT, even Applesoft BASIC. Can we do OpenGL but still have it feel as simple as writing graphics code on the Apple ][? Can we simplify current graphics approaches so that they at least feel simpler, like the original QuickDraw on the Mac?
  • The IDE is designed to make Java-style programming less wretched. Check out the Integration discussion board to see just how un-fun it is to figure out how the Java CLASSPATH and java.library.path work, or how to embed AWT and Swing components. These frustrations and complications sometimes are even filed as bugs in the Processing bugs database by users who have apparently become spoiled by not having to worry about such things.
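
To make that concrete, here’s a minimal example of the difference. The complete Processing sketch is the handful of lines shown in the comment; the class below is only a rough approximation of the kind of wrapper the preprocessor adds before handing things to the Java compiler (the exact generated code varies by release, and the class name is just a placeholder).

    // The entire sketch, as typed into the PDE:
    //
    //   void setup() {
    //     size(400, 400);
    //   }
    //
    //   void draw() {
    //     ellipse(mouseX, mouseY, 20, 20);
    //   }
    //
    // And roughly what it takes to do the same thing as a plain Java
    // class against the Processing core library (an approximation,
    // not the preprocessor's exact output):

    import processing.core.PApplet;

    public class Blah extends PApplet {

      public void setup() {
        size(400, 400);          // open a 400 x 400 window
      }

      public void draw() {
        ellipse(mouseX, mouseY, 20, 20);  // circle follows the mouse
      }

      public static void main(String[] args) {
        PApplet.main(new String[] { "Blah" });  // launch the sketch
      }
    }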

If pressed, I’d say the language itself is probably the easiest piece to let go of—witness the Python, Ruby, and now JavaScript versions of the API, or the C++ version that I use for personal work (on increasingly rare C++ projects). And lots of people build Processing projects without the preprocessor and the PDE.

In some cases, we’ve been accused of not being clear that it’s “just Java,” or even that Processing is Java with a trendy name. Complaining is easier than reading, so there’s not much we can do for people who don’t glance at the FAQ before writing their unhappy screeds. And with the stresses of the modern world, people need to relieve themselves of their angst somehow. (On the other hand, if you’ve met either of us, you’ll know that Casey and I are very trendy people, having grown up in the farmlands of Ohio and Michigan.)

However, we don’t print “Java” on every page of Processing.org for a very specific reason: knowing it’s Java behind the scenes doesn’t actually help our audience. In fact, it usually causes more trouble than not, because people expect it to behave exactly like Java. We’ve had a number of people copy and paste code from the Java Tutorial into the PDE and then get confused when it doesn’t work.

(Edit – In writing this, I don’t want to understate the importance of Java, especially in the early stages of the Processing project. It goes without saying that we owe a great deal to Sun for developing, distributing, and championing Java. It was, and is, the best language/environment on which to base the project. More about the choice of language can be found in the FAQ.)

But for all the trouble the preprocessor and language component of Processing are for us to develop (or as irrelevant as they might seem to programmers who already code in Java), we’re still not willing to give them up—damned if we’re gonna make students learn how to write a method declaration and “public class Blah extends PApplet” before they can get something to show up on the screen.

I think the question is a bit like the general obsession with trying to define Apple as either a hardware or a software company. They don’t do just one—they do both. They’re one of the few to figure out that the distinction actually gets in the way of delivering good products.

Now, whether we’re delivering a good product is certainly questionable—the analogy with Apple may, uh, end there.

Wednesday, August 27, 2008 | languages, processing, software  

Glagolitic Capital Letter Spidery Ha

A great Unicode in 5 Minutes presentation from Mark Lentczner at Linden Lab. He passed it along after reading this dense post, clearly concerned about the welfare of my readers.

(Searching out the image for the title of this post also led me to a collection of Favourite Unicode Codepoints. This seems ripe for someone to waste more time really tracking down such things and documenting them.)

Mark’s also behind Context Free, one of the “related initiatives” that we have listed on Processing.org.

Context Free is a program that generates images from written instructions called a grammar. The program follows the instructions in a few seconds to create images that can contain millions of shapes.

Grammars are covered briefly in the Parse chapter of vida, with the name of the language coming from a specific variety called Context Free Grammars. The magical (and manic) part of grammars is that their rules tend to be recursive and layered, which leads to a certain kind of insanity as you try to tease out how the rules work. With Context Free, Mark has instead turned this dizziness into the basis for creating visual form.
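
To give a flavor of what a recursive, layered rule feels like, here’s a toy sketch in Processing syntax (my own illustration, not Context Free’s actual grammar language): a single “rule” that draws a square and then invokes itself, shifted, slightly rotated, and scaled down, until the shapes become too small to matter.

    // A toy recursive "rule": each call draws one square, then applies
    // the same rule again a little higher, smaller, and rotated.
    void setup() {
      size(400, 400);
      background(255);
      noFill();
      translate(width/2, height);  // start at the bottom center
      rule(60);
    }

    void rule(float len) {
      if (len < 1) return;           // stop once the squares are tiny
      rect(-len/2, -len, len, len);  // draw this layer
      translate(0, -len);            // move up by one square
      rotate(radians(6));            // lean a little
      rule(len * 0.92);              // the rule invokes itself
    }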

Updated 14 May 08 to fix the glyph. Thanks to Paul Oppenheim, Spidery Ha Devotee, for the correction.

Monday, May 12, 2008 | feedbag, languages, parse, unicode  
Book

Visualizing Data is my 2007 book about computational information design. It covers the path from raw data to how we understand it, detailing how to begin with a set of numbers and produce images or software that lets you view and interact with information. When first published, it was the only book for people who wanted to learn how to actually build a data visualization in code.

The text was published by O’Reilly in December 2007 and can be found at Amazon and elsewhere. Amazon also has an edition for the Kindle, for people who aren’t into the dead tree thing. (Proceeds from Amazon links found on this page are used to pay my web hosting bill.)

Examples for the book can be found here.

The book covers ideas found in my Ph.D. dissertation, which is the basis for Chapter 1. The next chapter is an extremely brief introduction to Processing, which is used for the examples. Next (Chapter 3) is a simple mapping project to place data points on a map of the United States. Of course, the idea is not that lots of people want to visualize data for each of the 50 states. Instead, it’s a jumping-off point for learning how to lay out data spatially.

The chapters that follow cover six more projects, such as salary vs. performance (Chapter 5) and zipdecode (Chapter 6), followed by more advanced topics dealing with trees, treemaps, hierarchies, and recursion (Chapter 7), plus graphs and networks (Chapter 8).

This site is used for follow-up code and writing about related topics.