‘hello world’ on the Replicator 2

20 minutes to set up, 10 minutes to print. A whole new world, endless itches to be scratched, and lots of extruded, environmentally safe (comparatively) plastic gewgaws, tools, parts, and maybe even inventions to come.

(This is a print that comes pre-installed with the SD card you use to load files into the Replicator. The chains move freely after a gentle snap of the connective plastic printed with them.)

Digital Age Requires Fluid Mental Models

In the last few months, digital friends of mine and I have been asking ourselves: are we going to hit a point when we get stuck in our ways and try to make everything fit into what we learned in the 2000s?

It’s a common experience for people who come up digital and are digital change agents to encounter deeply held beliefs about how things should be done in a particular industry. Trying to bring new ideas, new processes, new approaches to ideation, digital change agents (for lack of a better phrase) frequently encounter one of two reactions: 1) “no, no that’s not how we work” to reject the new ideas; or 2) “well, yes we can do that, but it’s still in the service of _______” (insert some long-held truth about the business or industry).

One of the skills that has been forced on digital practitioners of all stripes is the ability to change mental models year to year, hold them simultaneously, or swap them out within the course of a day. Since 1994, there have been very few “truths” that have held primacy or even sway for more than 18 – 24 months. Technology standards and languages change – remember when we were all scrambling to learn Flash, or when some version of SQL was the norm for data? That was only 5 years ago, and today we’re scrambling to re-embrace HTML and unstructured noSQL data.

The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.
– F. Scott Fitzgerald

This isn’t a question of intellectual character so much as a necessity – you just do it, cuz you have to; you adjust your habits, pick up new rhythms, stretch different muscles in your brain, and suddenly it’s something you know how to do. As my piano teacher says to me: “practice doesn’t make perfect, practice makes permanent.”

Personal computers and the internet are upending industries, challenging old assumptions, creating new possibilities, and forcing us to revisit fundamentals over and over again. Trying to apply what I knew about the internet and how people use computers in 2005 – before Facebook, while Encyclopedia Britannica was still being published, while we were still awaiting equipment that could do augmented reality, geo-location, instant web searches from a phone, the dominance of Flash, the prevalence of Windows as the interface to computing – would be disastrous.

So far, when it comes to the internet and personal computing, there are no lasting truths – it’s a state of constant exploration, questioning, iteration, and response to the new. While there are still lessons from the past that can be applied – they no longer constitute ‘truths’ or tenets. They are now tools in my ever-growing toolbox.

A lot of professions are already well-suited to this. In law, precedents and decisions are constantly introducing new conceptual frameworks. In science and medicine, there is a constant stream of research, new techniques, equipment and drugs, and deeper understanding of the field. But, some professions have traditionally had longer arcs of permanence and refinement – and in those industries, adapting to change is harder.

In my industry, there are some hard-earned, powerful, and hard-to-shake truths that constitute a firm, even rigid mental model. For the last 60 years, the thirty second spot has been the center of marketing arts and sciences and marketers’ most powerful weapon. Many of the most significant advances and models in the industry are based on what practitioners have learned doing TV – digital briefs tend to be derived from TV briefs, story and magic and big ideas are the metric and baseline for evaluating the value of a digital idea, and emotionally-driven messaging is the default goal of most work. It’s so refined and so ingrained that it constitutes the mental model in large parts of the industry. A mental model can be so ingrained that it becomes nearly impossible to see things in a different light, or in their own light.

The Fitzgerald quote above is a bit provocative in that it’s a judgment on intelligence. But it should be recast as a skill that we need to have. Personal computing, the internet, and technological change are going to present new challenges and new possibilities to us at an unprecedented and ever faster rate – being able to hold several mental models in your head, simultaneously or serially as the situation changes, is going to be a key to survival and success.

Archimedes, Asimov, and Advertising

Somewhere in the blogo-/google reader/twitter-sphere, I recently came across a great Isaac Asimov quote that helps to capture my ongoing unease with the big idea:

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny…’

You could probably swap in design for science and have a good description of where better comes from. A lot of great design work – from the iPod to Nike+ to OXO – comes from “hmm” moments. Unlike the divine inspiration of a muse, these moments come from humbler places: “I hate having to bend down or hold up the measuring cup to see if it’s the right amount”, “Why do I have to go to my computer to type in the running results I have on my racing watch?” “It sucks having to go to three different places to maintain my MP3 player and get songs for it.” Or even the simpler, “hmmmm, there’s got to be a better way”, or “do we still have to do it the way we have for the last five years.”

Truth, truth, Ideas, and ideas can come from small, “funny” places.

Quick hits on big ideas:

A post from me about design versus advertising creative.

A post on Luke Sullivan’s blog about big, long, and multi-sized ideas.

One other on my blog about Steve Jobs’s presentation of iCloud: “it just works.”

Trendwatch(!): Synaesthesia is the New Apophenia

Great review in The New York Review of Books of V.S. Ramachandran’s The Tell-Tale Brain. Ramachandran’s books are depressingly, cripplingly, neurally reductive of everything we love about ourselves (appreciation of art, response to music, loyalty). But he writes so elegantly and cheerfully and with such engagement that I completely forget that elegance, cheerfulness and engagement are nothing more than Savanna-evolved modules of our brain, and I just enjoy the reading, believing somehow that I am more than a strange loop or gadget.

This is a great review of what looks to be a fascinating book, but the passage that I loved was about the synaesthesia chapter. Synaesthesia is the tendency of an individual brain to connect sensory inputs from one sense in the physical world to another sense inside the brain: colors have smells, numbers have color associations, shapes and sounds have connections in the brain. Apparently (I didn’t know this), it has been a matter of debate whether this is an actual condition or the product of conditioning (i.e., whether someone is wired that way or has learned the association), so the first thing Ramachandran does is demonstrate that synaesthesia is a real neurological phenomenon (which is kinda cool).

Then, he argues through a series of experiments that it’s the result of “anatomical propinquity” – the physical proximity of certain sense functions to each other in the brain:

When a person with synesthesia perceives numerals there is an abnormal crossing over of nerve activity into the adjacent color area of the brain; the two areas are not insulated from each other, as they are in most people. One brain area excites the other, despite the lack of objective link between numbers and colors. In fact, it is surprising that this kind of thing doesn’t happen more often in the brain, because electrical potentials could easily spread from one area to another without something to damp things down.

And then the payoff from Ramachandran himself:

Thus synesthesia is best thought of as an example of subpathological cross-modal interactions that could be a signature or marker for creativity.

This is a neurological basis, I think, of one of my favorite words: apophenia, the tendency in the mind to connect unlike things to each other, the source of creativity or paranoia, depending on the result and the viewpoint of the observer. Most of us digerati learned the word from William Gibson’s Pattern Recognition, and now it has a rich, quirky basis in neurology! The anatomical propinquity of various functions in our brain, and the failure of certain parts of certain brains to “damp down” electrical potentials (or keep wires from getting crossed, almost literally) could be the (sadly reductive) basis of apophenia.

Depressed, but exhilarated.

Making Lists: Arbitrary, but useful, way of focusing

I do a lot of trainings and workshops for Boulder Digital Works. One of the trademarks of all the BDW programs is immersion and interaction. Every day, we try to have breakout sessions where people work together to solve a problem, figure something out, or brainstorm ideas. Then they report back to the larger group. One of the tricky pieces about breakout sessions and reports is that they can turn into simple reports of what the group talked about, simply capturing all the ideas that came up. That’s great if you’re looking to generate ideas, but if you’re trying to get groups to think differently, or try out new ideas and frameworks for thinking about things, or tee up conversations that help to figure out priorities, this can be a little loose.

I’ve recently started working constraints into breakouts. Remembering the Steve Jobs line about focus:

People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of many of the things we haven’t done as the things we have done.

In that spirit, a lot of breakouts I do ask groups to “to pick the top 3 things” or “choose one action in each of these three areas”.

Making lists creates interesting dynamics that can build self-awareness, force people to expose their logic and thinking, and highlight otherwise overlookable differences between groups.

The NY Times recently did an exercise in which music critics tried to figure out the 10 most important classical music composers of all time. In an area as snobby as classical music, there was a wonderful self-awareness of how ridiculous these kinds of exercises are (see the Dead Poets Society clip: “I like Byron, I give him a forty-two, but I can’t dance to it.”), while still embracing the fun of the conversation. Sports fans love to make lists, and baseball statistics nuts will argue endlessly about the tenth spot in a top ten list for greatest second basemen. Rob Fleming, owner of Championship Vinyl in High Fidelity, has lots of memorable lists — though the nature of the list was usually funnier and more interesting than the list itself.

Still, lists can be fun and useful. Check out the NY Times greatest composer article to see how they can be fun.

First, the self-awareness of the silliness and the value of the exercise:

I began this project with bravado, partly as an intellectual game but also as a real attempt to clarify — for myself, as much as for anyone else — what exactly about the master composers makes them so astonishing. However preposterous the exercise may seem, when I found myself debating whether to push Brahms or Haydn off the list to make a place for Bartok or Monteverdi, it made me think hard about their achievements and greatness.

Wrestling with the difficult choice for the second and third slots, he asks who comes out on top: Mozart or Beethoven?

The obvious candidates for the second and third slots are Mozart and Beethoven. If you were to compare just Mozart’s orchestral and instrumental music to Beethoven’s, that would be a pretty even match. But Mozart had a whole second career as a path-breaking opera composer. Such incredible range should give him the edge.

Still, I’m going with Beethoven for the second slot. Beethoven’s technique was not as facile as Mozart’s. He struggled to compose, and you can sometimes hear that struggle in the music. But however hard wrought, Beethoven’s works are so audacious and indestructible that they survive even poor performances.

This sounds remarkably like the Babe Ruth versus Barry Bonds debate among the sabermetrics folks. Bonds’s numbers are ahead of Ruth’s (over a longer career), but Ruth was also a pitcher, and even pitched in the World Series.

An interesting note on how even setting the parameters can change the tenor of the discussion:

I’m running out of slots. In some ways, as I wrote to one reader, either a list of 5 or a list of 20 would have been much easier. By keeping it to 10, you are forced to look for reasons to push out, say, Handel or Shostakovich to make a place for someone else.

His top five, in order, were Bach, Beethoven, Mozart, Schubert, Debussy, at which point the next five got tricky.

In ranking the “dynamic duo of 19th century opera”, character came into the equation in terms of who comes out on top:

But who ranks higher? They may be tied as composers but not as people. Though Verdi had an ornery side, he was a decent man, an Italian patriot and the founder of a retirement home for musicians still in operation in Milan. Wagner was an anti-Semitic, egomaniacal jerk who transcended himself in his art. So Verdi is No. 8 and Wagner No. 9.

And when he got to number 10, he faced the problem of all the people who would never make the list: Haydn, Ligeti (I had never heard of him, so there’s another great thing about list-making: even the exclusions help you learn), Messiaen, Shostakovich, Ives, Schoenberg, Prokofiev, Copland, and Monteverdi… all passed over in favor of Bartok.

Make lists, force priorities and hard choices, and get people to explain them. Lots to learn, lots of spirited conversation.

Practice, Craft, In Your Blood: “Man in a Blizzard”

Roger Ebert blogs – rather, gushes – about “Man In a Blizzard”:

This film deserves to win the Academy Award for best live-action short subject. (1) Because of its wonderful quality. (2) Because of its role as homage. It is directly inspired by Dziga Vertov’s 1929 silent classic “Man With a Movie Camera.” (3) Because it represents an almost unbelievable technical proficiency.

You can tell from the cinematography he knew exactly what he was doing and how to do it. He held the Vertov film in memory. Stuart must already have been thinking of how he would do the edit and sound. Any professional will tell you the talent exhibited here is extraordinary.

The creator of the film writes to Ebert:

The simple answer as to how it was done so quickly: practice.

Most of the work I’ve done for the past half dozen years has been improvised online press-related shorts, which by nature requires a fast turnaround. Before that, I used to storyboard all my work — so I had a strong sense of film language. The trick is to step into situations, often without a plan, and try to make it look like it was all planned. For instance, when I first started doing work for Filmmaker Magazine, I had just done my NYFF44 series, and Scott Macaulay asked if he could see the scripts I used for the episodes; I had to tell him there weren’t any.

“In your blood” from the title is a reference to another blog post, where Ray Bradbury talks about reading Moby Dick dozens and dozens of times to prepare to write the screenplay for the movie version starring Gregory Peck.

Living (and balancing simplicity) With Complexity

From Donald Norman’s new book, Living With Complexity, some lovely passages about how we crave richness and complexity, but in ways that are manageable, revisitable, rewarding, satisfying and fun. It fits nicely with a recent post about the difference between complexity and complication, simplicity and simplism:

It is no great trick to take a simple situation and devise a simple solution. The real problem is that we truly need to have complexity in our lives. We seek rich, satisfying lives, and richness goes along with complexity. Our favorite songs, stories, games, and books are rich, satisfying and complex. We need complexity even while we crave simplicity.

… Some complexity is desirable. When things are too simple, they are also viewed as dull and uneventful. Psychologists have demonstrated that people prefer a middle level of complexity: too simple and we are bored, too complex and we are confused. Moreover, the ideal level of complexity is a moving target, because the more expert we become at any subject, the more complexity we prefer.

Learn to Code Already … Rushkoff SxSW vid

I’m a big fan of people knowing how to code. Not in-depth, elaborate knowledge of every sorting algorithm, or of alternatives to various ____ transforms, but the ability to handle variables, manage loops, and create logic that yields something more quickly and accurately than pen and paper or a spreadsheet. Enough code to work with a dataset, rather than forcing your spreadsheet into a clumsy database that leads to mistakes and prevents interesting exploration. Enough code to visualize and play with ideas; enough code to create interfaces.

The Shallows is showing us that our highly plastic brains (or minds) can be shaped by the simplest of technological habits and modes. So what about our passive relationship to the screens on our devices? If we always watch, watch, watch and click, click, click, but never take the initiative, what happens to us? I love Rushkoff’s notion that most of society lags behind its technology, leaving us with passive audiences who accept the technological result and an elite who creates it. I’m late to the party and really struggling to get my hands on the damn book, but this is worth putting effort into.
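To make concrete the level of coding I mean, here’s a minimal sketch in Python – the dataset and field names are invented for illustration. A couple of variables and one loop do what a spreadsheet pivot does, but reproducibly and without the copy-paste mistakes:

```python
import csv
import io
from collections import defaultdict

# A tiny in-memory "dataset" standing in for a CSV export.
# (The cities and numbers here are made up for the example.)
raw = """city,visits
Boulder,120
Denver,340
Boulder,95
Denver,210
"""

# A few variables and one loop: total visits per city.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["city"]] += int(row["visits"])

for city, total in sorted(totals.items()):
    print(f"{city}: {total}")
```

Swap the string for `open("export.csv")` and the same dozen lines handle a million rows – that’s the kind of everyday leverage I mean, not computer science.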