Me on the Track Changes Podcast

A few weeks ago, my friend Paul Ford of the New York City product design studio Postlight invited me to appear on their podcast Track Changes, hosted by Paul and his co-founder Richard Ziade. It was a fun conversation that touched on a number of topics, including what I learned about designing products in startups and how that compares to my experience building products at big companies.

One of the biggest lessons I’ve learned in the past five years, having been in start-ups and been in pure software companies, is in contrast to the way I thought about products when I was at a design studio before, basically the little agency, or a big agency, or at a big company. It’s very difficult to pre-determine what a product is going to look like or feel like or even do from the beginning. And maybe the biggest truism that I’ve discovered about software products is they are the direct result of the people who work on them in the beginning, those very early formative stages.

That quote, which I copied from Postlight’s announcement, was transcribed more or less verbatim, I believe, so excuse the rambling—I sound marginally smarter when you hear it. The episode is just out today and you can listen via the SoundCloud embed above or grab it for your favorite podcast app at trackchanges.postlight.com.

+

Infographics for Geeks

The Batmobile by Tom Whalen

Later this month Mondo, purveyor of all kinds of pop culture goodness, is mounting an exhibition called “Info-Rama” at its gallery space in Austin, TX. The show features playfully creative infographics by artists Tom Whalen, Kevin Tong and Matt Taylor that “seek to illustrate and illuminate classics like ‘Back to the Future,’ ‘Teenage Mutant Ninja Turtles,’ ‘The Avengers,’ ‘Star Wars’ and many more.” My favorite, obviously, is this poster design celebrating the Batmobile from the 1960s television series starring Adam West.

More at mondotees.com.

+

Comparing Google Maps and Apple Maps

Google Maps vs. Apple Maps

Former Apple cartographer Justin O’Beirne is writing an extensive, detailed and even-handed comparison of Google Maps and Apple Maps. The first part is online now and it’s a fascinating read. In surveying what cities, roads and places each product displays at given zoom levels, O’Beirne shows that the two systems, whose visual presentations might be easily confused with one another, are in fact not very similar at all.

It’s clear that Google thinks transit is important, while Apple thinks that airports, hospitals, and landmarks are important. Two very different views of the world!

O’Beirne refrains from calling one or the other superior, but for me, the clear difference is that Apple Maps is concerned with cartographic integrity where Google Maps is concerned with the experience of using the application. That is to say, at most given zoom levels, Apple Maps presents formally better maps, but holistically, Google Maps presents more of the right information at the right time. Which is consistent with what animates each company: Apple is focused on beauty and elegance, and Google is focused on information delivery.

It may sound like I’m a Google Maps partisan, but in fact I prefer Apple Maps. I find that it’s easier to use, clearer, and its integration into the OS is a much nicer experience overall. That said, I acknowledge that sometimes the data that drives Apple Maps is less than optimal or inaccurate—my wife, like lots of people, would say it’s just plain bad. I disagree with that but I understand why she says it: Apple’s turn-by-turn directions can be unreliable, to put it mildly. The scope of O’Beirne’s analysis apparently won’t include turn-by-turn directions but for most people, that is where these maps live or die.

Read the analysis at justinobeirne.com.

+

New Ways of Tracking Time

Two new, interestingly designed ways to think about time. The first is Time Timer, which shows you the passage of discrete blocks of time using “a patented red disk that disappears as time elapses.” This method makes it easier and less stressful to focus on given tasks by making the remaining time look and feel more concrete. Time Timers come in clock-like form, as watches, and in app form—it’s a complete product line.

Time Timer

More at timetimer.com.

The second is Today, currently raising funds on Kickstarter. It’s a design for a wall-mounted timepiece that removes the hour and minute markers from the clock face and “simplifies the day into a perfect balance of dawn, noon, dusk, and midnight.” It has one hand that moves at half the speed of a regular clock’s hour hand, making a single full rotation each day. Its shading also helps you understand what part of the day it is.

Today Clock

More at kickstarter.com.

+

A Conversation About Fantasy User Interfaces

Fantasy UI by Cantina Creative

As a user interface engineer at Google, Kirill Grouchnikov brings real world UIs to life, but he devotes a considerable portion of his free time to exploring the world of fantasy user interfaces—the visual design work that drives screens, projections, holograms (and much more exotic and fanciful technologies) in popular films and television shows. At his site Pushing Pixels, Grouchnikov has logged an impressive number of interviews with the designers who have created fictitious interfaces for “The Hunger Games,” “The Martian,” “Avengers: Age of Ultron,” “Kingsman: The Secret Service” and many more. Each conversation is an in-depth look at the unique challenges of designing in support of fantastical narratives. As both a committed film buff and a working designer, I find the conversations to be fascinating and revelatory; they shed light on working processes that are not unlike those of traditional interface designers but are also wholly different. I turned the tables on Grouchnikov recently to ask him what he’d learned from surveying so many of these fantasy UI design practitioners, and what his thoughts are on the impact that this work has on both the design profession and the world at large.

Khoi Vinh: How did your interest in this kind of user interface design begin? Did it precede or follow your interest in the craft of “real” UI design?

Kirill Grouchnikov: It started sometime in 2011, which was when I fell in love with “Tron: Legacy.” Except for the first intro part that happens in the real world, I was obsessed with everything in that movie. I wanted to know more about how it was made, so I emailed Bradley “GMUNK” Munkowitz who did a bunch of visual effects for it, and that resulted in this article. It wasn’t an interview about screen graphics per se, as there are not that many of them in that specific movie.

But in my head I kept returning to it again and again, and I started finding web sites that were devoted to screen graphics and fantasy UIs – Fake UI, HUDS+GUIS, Interface Love, Sci-Fi Interfaces, Kit FUI and, most recently, the r/fui community on Reddit. I wanted to dig a bit deeper into what goes into making those interfaces: what is put on the screen and what is discarded, the explorations behind the decisions, how the advances in human-computer interactions in the real world of commercial companies and research labs find their way into the worlds of film storytelling, and how the fantasy UIs, in turn, find their way into the real-world explorations.

That last part is close to what I do for a living—doing user interfaces on the engineering and implementation side. I love how fantasy UIs and interactions can plant seeds of “what if?” What if we could wave our hands around like in “Minority Report”? What would work and what would not? What if we could go beyond flat screens and operate on holographic representations of complex data? What if we could leave behind the decades-old mouse and keyboard way of “telling” the machine what to do, and find a less abstract way? This is what I love about movies like “Her,” “Ex Machina” or “Iron Man.” They don’t have to accurately predict the future, but they could hint at where those interactions might go, and plant those seeds in the new generation of designers and engineers.

Visual f/x reel from Cantina Creative, who have created fantasy UIs for many popular films.

I want to get to that aspect where the fantasy bleeds into the reality in a bit, but first, what have you learned about the way these designs are crafted for the movies? Are there common processes that the designers go through to dream them up?

Some themes seem to be repeating again and again as I talk with people who work on these productions.

The overall process of designing fantasy interfaces has certain parallels to what we do for interfaces on our real-life screens. Fantasy design is there to support the story, much like the real design is there to support the product. You have people that define the overall visual language of the movie—the director and the production designer, with perhaps the cinematographer. Then it goes through iterations, starting with explorations and ideas, and then it gets progressively closer to the look that everybody is happy with—given the time frame and the budget of the overall project.

The most important thing that comes up in pretty much every interview is that screen graphics are there to support the story. You only have so much time (on the order of a few seconds) to succinctly make the specific point and then move on to the next shot. Very rarely do you have the luxury of spending a significant amount of time on the UI. After all, you have the highly-paid actors on the set, so your screen time is better spent having them in the frame. Incidentally, this is where translucent monitors and holograms play well—allowing the director to have both graphics and faces in the same frame. But I digress.

So something like a giant red banner with flashing “Access Denied” is a necessity. You have to quickly convey the point and not require the viewer to somehow locate the small notification somewhere on the gigantic movie screen.

And finally, there’s a desire to have something new and fresh on the screen. This is part of the creative side of the movie business: to not repeat what has already been done, even if you’re working on the next installment of a well-established franchise. This is where designers might go hunting for something new that is being explored in the labs, or for something old that can be dusted off, or for something borrowed from the natural world, like the curved elements in “Prometheus” or “Jupiter Ascending.”

In the case of that “Access Denied” example, what have you learned about how these designers balance the need to tell the story with plausibility and verisimilitude? There are degrees of storytelling, of course, and I think a lot of UI designers watch movies and often see stuff that’s just so clearly unrealistic that it takes us out of the experience.

I think it’s a balance between three things which are intertwined. I personally don’t dissect any particular story as being too “far out there.” As long as it establishes a consistent universe and doesn’t break the rules that it sets for itself, I am ready to believe in those rules. It’s only when the script takes a turn or a jump that is completely unsubstantiated—once again, in the framework of rules set up by the script itself—that it takes me out and I start looking at things with a more critical eye.

The second thing is that everything we see on the screen needs to support the story, first and foremost. I don’t expect a character to get stuck on a bunch of loading screens as they are interacting with their devices. It might not be too plausible for the technology that we have today, but the story needs to keep moving and not get stuck on whatever imperfections are “supposed” to stand in its way. Once again, I’m quite fine with jumping over those imperfections as long as they are trivial in the confines of the universe set out in the specific story.

And this brings me to the last point: matching the technology to the time of the story. So if it’s something like “Prometheus,” “Oblivion” or “The Expanse” that happens far enough in the future, I think we as the viewers need to be prepared to be open to technology being orders of magnitude ahead of where we are today. To draw an awkward parallel, imagine showing a documentary on today’s devices to Alan Turing. I honestly don’t know how plausible or believable he’d find what we, today, just take for granted. And then, on the other hand, interfaces in the Marvel universe or the “Mission: Impossible” and James Bond franchises can’t take too big of a leap. The action in those movies happens today, with the only difference being the budget and human talent available to the characters in them. In such movies designers can’t really go too far beyond the edge of what we have today, and I think that’s one of the factors that figures into the decision-making process behind not only the design and the screens, but the portrayal of technology itself.

There’s another aspect to advancing the story that I’ve seen a lot in the past two decades; often, a computer interface serves as a kind of a crutch for the plot. I always balk when a tense moment relies on a progress bar getting to 100% or something; it really feels like the screenwriter didn’t really do his or her job of creating a legitimately compelling dramatic challenge for the protagonists. As these UIs become more elaborate and more visually stunning, what are your thoughts on scripts becoming more and more dependent on them to tell stories that would otherwise rely on good old fashioned plot development?

I’d say that bad writing has always been there, way before computer interfaces. A tense moment that relies on transferring some data at the very last second used to be a tense moment that relied on the hero cutting the right wire at the very last second before the bomb explodes, or a tense moment that relied on the hero ducking into some random door a second before they’d be seen by the security guard, an explosion that takes out an entire planet when our heroes are right at the edge of the blast radius, and a myriad of other similar moments.

We’re witnessing an unprecedented—at least in my view—explosion in consumer technology available to people all around the world. It is becoming hard to imagine a story told in a feature film or episodic television that is happening in modern days that does not have a screen or two in it. The stories necessarily reflect the world that we live in, and I think that a story that doesn’t have technology in it would need to actually justify that decision in some way.

Good writing that creates a tight plot with no gaps or unsubstantiated “leaps of faith” is hard, and it’s always been hard. Technology in general, and devices and their screens in particular, are indeed used more to paper over those gaps. Of course for us it’s ridiculous when somebody talks about creating a GUI interface using Visual Basic to track an IP address. But I think the writer who came up with that line would have come up with a similarly hand-wavy way to advance the story forty years ago, relying on some random tip from a gas station clerk or the overused trope of “the killer always shows up at the funeral, so this is where we’ll catch them.”

The worst of this bunch for me was the climax scene in “Independence Day” where a human-made computer virus was able to take down all the alien ships. The same guy who wrote and produced that movie is doing the upcoming sequel, but I hope that we’ll see something a bit less inept this time around.

It’s a fair point that there’s always been bad writing. I guess the difference though is that in the analog age, very, very few people ever actually had to defuse a bomb. Whereas today, everyone uses phones, laptops and who knows what kind of screens on a daily basis. So a screen as a plot device is much more familiar, yet it seems like the way a lot of movies overcome that quotidian nature is by trying to make them seem more fantastical, rather than trying to make them seem more believable. In some ways, I feel like everyone learned the wrong lessons from “Minority Report,” which made a really concerted effort to craft plausible interfaces. Other moviemakers just went off and made over-the-top UIs with no basis in research or theory. Am I overthinking this?

How many times have you seen somebody—a good guy or a bad guy, depending on the situation—shooting a desk monitor or even a laptop screen, implying somehow that this is the way to destroy all the information that is on the hard disk, when that disk remains completely unharmed by those bullets? Or even worse, in our age where everything is in the cloud, there’s no point in harming those screens to begin with.

What I’m trying to say is that screens themselves are just a skin-deep manifestation of the information that surrounds us. They might look fancy and over-the-top, as you say, but that’s a pixel-level veneer to make them look good for the overall visual impact of a production. I think that most of what those screens or plotline devices are trying to do is to hint at the technological capabilities available to people or organizations who are operating those screens. That’s where it goes to what I was saying earlier: the incredible advances in technology in so many areas, as well as the availability of those advances to the mass consumer market.

If you look at a movie such as “Enemy of the State” from 1998 and the satellite tracking capabilities that it showed us, that was pretty impressive for its time. Back then GPS had a very significant difference in accuracy available to military devices (PPS) and civilian devices (SPS); publicly available signals had intentional errors in the range of 100 meters. That limitation was lifted two years after the movie came out (aka correlation vs. causation), and now hardly anybody would be impressed by being able to navigate your way without having to struggle with a foldout map. And of course, now that mobile phones sell in the billions, everybody can be tracked by triangulating signals from cell towers. That’s not impressive anymore, so that’s an example of a crutch that has been taken away from the script writers.

I don’t think that the question here is about how plausible the screens are, but rather how plausible the technology that those screens manifest is. So you’d be talking about the AI engine that is J.A.R.V.I.S. in “Iron Man,” or the AI engine that is manifested as Samantha in “Her,” or being able to track the bad guys via thermal scans in a 3D rotating wireframe of a building in any number of action movies, or even the infamous zoom-rotate-enhance sequences in low-budget procedural crime drama on television. The interface bits are just the manifestation of the technology behind the screens. Are we close to wide consumer availability of J.A.R.V.I.S.-like software? There are certainly a lot of companies working on that.

When we look at the devices around us and see all the annoying bits, and then see those annoying bits not being there in a feature film, that is quite believable in my opinion. So something like not needing to charge your phone every evening or getting a strong mobile signal as you’re driving on a deserted road gets a pass. When we look at the devices around us, and see those capabilities pushed just a few steps beyond the edge of what we see right now, that is quite believable as well. Especially with the recent revelations on the state-level surveillance programs, I as a viewer am not surprised to see similar technology taken a couple steps forward in Bond, Bourne or other similar spy-action thrillers.

Okay so let’s talk about how these fantasy interfaces are bleeding into reality. You cite the “Enemy of the State” example, which was prescient by two years. “Minority Report” and “Her” and J.A.R.V.I.S. from the “Iron Man” films are all often cited as being very influential. Are these fantasy UIs actually driving real world ideas, or are they just lucky? And are the designers who are creating these fantasy UIs aware of this “life imitates art” cycle?

“Minority Report” certainly benefited from the very impressive body of research that went into trying to predict the world of 2054 and its technology. I would say that that movie is by far the most influential in terms of how much real-world research it has ignited ever since it was released.

The AI engines in “Her” and “Iron Man” seem to hover just a few years ahead of what is happening right now in the world of machine learning and information analysis. You look at speech recognition and assistive technologies being developed by leading tech companies, and it would seem that the portrayal of speech-based interfaces in movies feeds on those real-tech advances. As for the AI capabilities themselves, there was a lot of overpromise and underdelivery in the ’80s and ’90s. And then you have AlphaGo beating one of the world’s best human players in a game that looked to be unreachable for machines just a few short months ago. Of course, that’s still not the general-purpose artificial intelligence that can serve reliably as your companion or advisor, and that one is still in the realm of science fiction for now.

You might call it prescience or luck when a movie correctly portrays the evolution of technology that was a few years out. But for every such movie, you have something like “The Lawnmower Man” that was made smack in the middle of the hype wave around virtual reality. It took the industry a couple of decades to get to the state where we can actually start talking seriously about augmented, virtual or mixed reality at mass scale. And even now it’s not clear what the “right” interaction model would be, from both a hardware and a software perspective.

As I mentioned earlier, I love how movies can plant seeds of ideas in our heads. People designing for movies and TV shows take seeds of ideas from the real world, and build their interfaces from those seeds. And then it flows the other way. Those seeds, having blossomed through a process of mutation and combination into something that we don’t yet have, now plant their own seeds in the minds of people who ask themselves “What if?” What if I could have a camera mounted on my screen that would track my fingers, my eyes or my overall gestures? What could that enable, and how would it actually evolve in the next five, ten or even twenty years?

There are all these grand predictions from self-proclaimed futurists about where we will be in fifty years. They sound quite attractive, but also completely unverifiable until we get there. Go back to 1973, when Motorola made the DynaTAC, and show me one documented, correct prediction of where we are now, with the complete domination and unbelievable versatility of mobile devices. In some movies art imitates life, or at least takes it a few steps forward. Some movies might be so influential as to spur interest in bringing their art to life.

Martin Cooper, who developed the first mobile phone at Motorola in 1973, said he was inspired by the communicator device on “Star Trek.” It then took a few decades of real-world technological advances to get to the stage where everything now is just one slab of glass, which evokes another device from “Star Trek”—its tablet computers. And if you look at the universal translator from the same TV show, the technology is almost there, combining speech recognition, machine translation and speech synthesis. And when—not if—that entire stack is working flawlessly, it will be just a small part of some kind of universal device that will evolve from the mobile devices that we have now. So in a sense, life doesn’t imitate art, but is rather inspired by it, and takes it to places that we didn’t even dream of before.

So is “life imitating art” the best way to judge fantasy UIs, then? As you’ve said, the first priority for this work is supporting the narrative, but over time, the more believable ones are the ones that seem to retain their respectability—or to put it another way, the least believable ones just start to seem ridiculous and even laughable. How should we be evaluating this work?

“Evaluate” is a bit loaded. I don’t know if you’d really want to reduce fantasy UIs to a single number like MPG for cars or FICO score for credit risk. There are just too many aspects to it. I think that first and foremost it needs to support the narrative. And that’s true for any other aspect of storytelling in movies and TV shows. If anything, and I mean anything, takes you out of the story, that’s bad craftsmanship. Everything needs to fit together, and this is what amazes me so much in movies and shows that are done well: to see dozens or even hundreds of people come together to work on this one thing.

And then, as you’re saying, there are things that stay with you afterwards. Things that are particularly compelling, be it “The Imperial March” from the original “Star Wars” trilogy, the cinematography of “Citizen Kane,” or the fantasy UIs of “Minority Report.” You might also say that a lot of times these things are timeless, at least on the scale of a few decades. It’s a rare thing, really, given the pace at which the world of technology is evolving. They don’t have to accurately predict that technology around us, but rather present, like you say, something that is believable not only in the particular movie universe they were born in, but also in the world around us. I certainly wouldn’t mind having J.A.R.V.I.S. in my life. Or at least to experience what it would be like to have such an intelligent entity at my disposal, to be able to judge its usefulness by myself.

Looking ahead, a lot of people predict that VR will become a viable form of cinematic storytelling. If one buys that, how will it impact the work of crafting fantasy UIs? Crafting something believable but still fake seems more difficult when it can be experienced from any angle or even at any distance.

It’s like you’re reading my mind, because I’ve been thinking about this a lot recently. There’s obviously a lot of interest in augmented reality, virtual reality and mixed reality in the last few years. Most of it for now seems to be concentrated in exploring that world in the gaming industry. This particular one isn’t of much interest to me, as I moved away from it when my kids were born. There’s just not enough free time left in the day to split between games and movies, so I chose movies!

I keep on thinking about the early years of making movies and telling stories in that medium, about how people first started taking what they knew from the world of stage theater and applying that in film. You had long uninterrupted takes where the camera wouldn’t move, wouldn’t pan, wouldn’t zoom. It was like watching a theatrical show as a viewer sitting and not moving. It took a few decades and a couple of generations of movie makers that were “born,” so to speak, into the medium of film, to find a completely new set of tools to tell the stories on film. Some things didn’t work for the viewers, and some things took perhaps a bit of experimentation and time to refine so that the viewers could understand certain shortcuts and certain approaches to convey specific intentions.

I honestly can’t tell if VR will be a big part of the future of “cinematic” storytelling. If you look at the last decade, 3D has been hailed as the future of the medium. But it’s been abused so much that we as the viewers are very wary of seeing the 3D sticker slapped onto a production as almost an afterthought, as a way to add a couple of bucks to the price of the ticket without adding anything substantial to our experience.

My hope is that VR will have a bit more time and a bit more patience given to it by the big studios. It needs a lot of both to develop the vocabulary needed to tell the stories in a format that feels native to that experience. Your question is about the fantasy UIs and screen graphics, but that’s a very small part of the overall craft of telling a story. There will undoubtedly be a lot of false starts where people are eager to get on the VR bandwagon and they will just take the existing ways of telling stories and apply them as-is to this new world. If there’s too much of this, this whole thing will be dead before it has a chance to develop itself as a completely new way of storytelling.

But if it evolves at a slower pace, perhaps starting with shorter productions, then I hope that in my lifetime I will get to experience stories that are presented in a completely different way from what we are used to nowadays. It doesn’t mean that it has to completely supplant what we have now—stage theater is quite alive even after a century of movies. It might not be something like a “VR movie”; it will be a completely different experience under a completely separate name. If that happens, the way user interfaces are presented will evolve along with it. And you know, who knows what will happen to how we interact with machines and information in the next few decades? Maybe we will have screens—rectangular, arbitrarily shaped, translucent, transparent. Maybe it will all be holograms, maybe every surface around us will be a screen, and maybe there will be some breakthrough and there will be no screens at all. Maybe a screen is just an unnecessary intermediary whose days are numbered.

So by the time we get to true VR experiences in cinema, fantasy UIs will be hinted at along the lines of “Her” and that will be completely natural to the current world at that time because screens will be long gone. It’s just too far ahead into the future, to be honest.

+

Control Devices by Drawing

Barcelona design student Marc Exposito created this impressive project as part of his thesis: a universal control interface for Internet of Things devices that uses simple drawing as a means of user input. If you want to turn on a light, you draw a primitive shape that represents the light (exactly which shape is apparently user-definable; in the video, the user points the iPad’s camera at a lamp and draws a circle over it to link the object and symbol) and draw a simple line with a dot at the end to turn it on. To change the color of the light, just select a color and scribble inside the circle. Multiple objects can be controlled at once, all via this simple visual vocabulary. Pretty impressive.
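Out of curiosity, here is a minimal sketch (in Python, and emphatically not the project’s actual code) of the kind of symbol-to-device mapping the video implies. The glyph names, gesture names and device fields below are all my own assumptions for illustration, not anything taken from Exposito’s thesis.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Device:
    """A hypothetical connected object, e.g. a smart lamp."""
    name: str
    on: bool = False
    color: str = "white"


class DrawingController:
    """Maps user-drawn glyphs to linked devices and a tiny gesture vocabulary."""

    def __init__(self) -> None:
        self.bindings: Dict[str, Device] = {}

    def link(self, glyph: str, device: Device) -> None:
        # In the video this pairing happens by pointing the iPad's camera at a
        # lamp and drawing a circle over it; here we simply register it.
        self.bindings[glyph] = device

    def stroke(self, glyph: str, gesture: str, color: Optional[str] = None) -> None:
        device = self.bindings.get(glyph)
        if device is None:
            return
        if gesture == "line-with-dot":          # the "turn it on" stroke
            device.on = True
        elif gesture == "scribble" and color:   # scribble with a color selected
            device.color = color


# Usage: link a circle to a lamp, switch it on, then tint it warm white.
lamp = Device("living-room lamp")
controller = DrawingController()
controller.link("circle", lamp)
controller.stroke("circle", "line-with-dot")
controller.stroke("circle", "scribble", color="#ffd9a0")
print(lamp)  # Device(name='living-room lamp', on=True, color='#ffd9a0')
```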

More at fastcodesign.com.

+

Noria “Redefines” Window Air Conditioner Units

Noria Window Air Conditioner

When I wrote about the staid state of window air conditioner units a few years ago, I was surprised to find that it resonated with readers everywhere. In spite of the superiority of central air, tons of people still have to make do with self-installable window units which, as I wrote, had not changed in decades.

To me, this is one of the enduring mysteries of contemporary industrial design, which has over the past twenty years sought to reinvent, redesign or elevate out of commodity status almost every object in the home, from vacuum cleaners to thermostats to toaster ovens. The closest thing to innovation that the AC market seems to have produced is so-called ductless air conditioning, but those units don’t address the problem that most Westerners want to solve with window units: cool a room with a machine that costs less than US$1,000.

That widespread interest is borne out by the Kickstarter campaign for Noria, a new project that aims to “redefine” window air conditioners. With just over a week left to go, the project has already raised nearly three times its funding goal. That’s a clear sign that it addresses a real need in the market.

https://www.youtube.com/watch?v=nktasCqHjzg

Noria claims to be less than six inches tall and forty percent smaller than standard units. This allows it to be stored and installed easily (the video demonstrates the process, and it looks like a revelation in terms of ease of use). Critically, this also means that when it sits in the window it does not block the view, a huge humanistic benefit that should not be underestimated. It seems like a huge improvement over standard units, of course, but it appears to be miles ahead of other recent, similar attempts like Quirky’s Aros air conditioner too. I hope that when it ships it lives up to all this amazing promise.

Find out more at kickstarter.com.

+

Advice for Founders Looking to Hire Their First Designer

I often meet entrepreneurs who are trying to get their startups off the ground and who are looking to find the right designer to add to their founding team. They’re usually pretty savvy about the importance of the craft and have an honest desire to build world-class design-centric products. What they need help with is finding that key first designer, the one who will create the foundation for their product and brand.

The smarter ones realize that they have to immerse themselves in the field and learn its contours. They know that it takes patience and persistence to meet the right designers, that they have to learn where they gather (online and offline), how they talk to one another, and what motivates them. Good founders take the same approach as they might follow in recruiting a chief technology officer or a head of sales: they immerse themselves in that profession so that they can understand the difference between a good candidate and a bad candidate.

Where I see even some of the smartest of these folks go astray, though, is in the kind of designer they’re looking for. I’m not even talking about whether they should be hiring product designers or branding designers or some hybrid of those skills. Rather, I’m talking about what role they want their first design hires to fill.

Many times, what I see is a reluctance to hire designers into key leadership positions. There’s a mistaken belief that if the work that needs to get done is fairly tactical, it therefore requires someone to simply execute the tasks. It’s true that in the beginning of a startup’s life, the design tasks can be largely the same regardless of what level designer you have: build the first product, establish the processes for design, develop the brand, put up the first web site, etc.

But even on a small team, it can make an enormous difference to the company’s culture and DNA whether the first designer to come aboard is at the staff-level or at the leadership level. The way a staff designer solves these problems can be very different from the way a design leader solves them. Staff level designers will tend to focus on the nuts and bolts; design leaders (good ones) will think about the short- and long-term journey of the company itself. This includes the nuts and bolts, but also takes into account a host of other factors that are natural outgrowths of a team member who is a peer to the company’s other leaders. This means that when a design leader is thinking about design, he or she is always thinking about recruiting, business strategy, partnerships, markets, capital, and more as well.

The conundrum is that while many founders see the wisdom in hiring leaders for technology, operations, sales and other aspects of the business, when it comes to design, they have a different approach. They’re more often than not content to hire someone with a more junior level of experience to play a more limited role in the formation of the company. That’s an understandable but shortsighted incongruity.

In politics, they say that “personnel is policy,” meaning who you hire determines what you do and how you operate. That’s just as true for startups, where products are the direct result of who works on them. There are of course countless ways to build companies, but if you want to build a truly design-centric business—which in many ways has become the baseline for startups today—then it behooves you to have a design leader on your team from the outset.

+

How Does a Film Editor Think and Feel?

Tony Zhou, creator of the incredibly insightful series of video essays about film, “Every Frame a Painting,” takes a look at the subconscious instincts and conscious thought processes that drive effective film editing. Zhou is an editor himself, and he brings a wonderful frankness to the intangibility of his craft. In this age where technology claims to be able to measure nearly anything, it’s refreshing to be reminded that some art forms, like editing, are inherently unmeasurable.

+

Pay What You Like for This New Typeface from Latinotype

Branding Examples

A very intriguing experiment from the talented folks at Latinotype: until 25 May, you can buy all fourteen styles of their new typeface, called Branding, for any price that you like—so long as it’s at least US$1. I hope the company will be forthcoming with the data, or at least the macro-trends, from this sale, as I will be very interested to hear how people value this opportunity. It helps too that Branding is friendly and attractive—a terrific design. Kudos to the team for this adventurous trial.

Learn more at latinotype.com.

Branding Font Styles
Branding by Latinotype
+