Last week Kickstarter launched Drip, “a tool for people to fund and build community around their ongoing creative practice.” This new service is a complement to the company’s original model; where “classic” Kickstarter helps people fund projects, Drip aims to fund people.
At its heart Drip is essentially a subscription service inflected to support the creative pursuits of “artists, authors, game designers, musicians, and filmmakers.” It’s worth noting that that list, quoted from the announcement blog post, emphasizes artists—and conspicuously fails to mention the technologists and product creators who have thrived on Kickstarter. This seems like an attempt to get back to the company’s original goal of developing a funding model for the arts, which over time has become somewhat diluted by the platform’s surprising effectiveness as a launching pad for products and businesses.
Drip also has an interesting take on how to do this: each campaign begins with a “founding membership” phase that lasts anywhere from a week to a month. Anyone who subscribes during this period is designated as a kind of special patron and may be offered special rewards for their early participation. The idea is to drive demand early on so as to start each artist off with maximum momentum.
The company’s three launch videos are also notable in that they exclusively feature women:
Learn more at d.rip. Oh, also, Kickstarter just redesigned its brand identity so that it’s both thicker and more bubbly while also opting for a more subdued flavor of green.
Back in September Apple made a welcome improvement to iTunes by automatically upgrading, at no cost, its customers’ already-purchased movies to 4K resolution. This was laudably customer-friendly but I also found it to be a savvy move in that it underscores the value of owning media.
It’s notable that movies and TV shows haven’t followed the path of music, at least not yet. Video media was never as thoroughly decimated by digital piracy as audio was, and as a result we don’t have a “Spotify for movies,” a comprehensive (or nearly so) streaming catalog available for cheap. The closest we have is Netflix, which has never been complete and, over time, continues to become even less so.
You can argue whether that’s ultimately good or bad for consumers (who wouldn’t want a Spotify for movies?) but at a minimum, the ownership of films allowed under the current model preserves a meaningfulness that has largely disappeared from music. I rarely buy albums anymore because I can listen to nearly anything I want at any time I want on Spotify. That’s a tremendous luxury but the flip side is that I don’t care about music nearly as much anymore, and I don’t really feel like any of it is “mine.”
By contrast, I do feel a certain pride of ownership over the movies in my collection, whether they’re digital or on physical media. These are movies that I’ve selected to be part of my own personal archive, that I plan to return to again and again. In this way I have a kind of relationship with them; they’re much more a reflection of who I am and what interests me than the albums I’ve assembled in Spotify.
To that point, it occurs to me that Apple could go even further in emphasizing the value of ownership by helping their customers convert rentals to purchases. If you rent a movie on iTunes, Apple should offer to let you pay the difference between the rental price and the purchase price to actually own it. To keep this reasonable, this offer could be limited to the rental period, which Apple also increased to forty-eight hours back in September. That’s the perfect amount of time to let a customer upgrade her transaction because it adds a useful sense of urgency to the offer. I posted this on Twitter over the weekend and was surprised by how many people seemed to think it was a good idea.
If you rent a movie on iTunes, they should offer to let you pay the diff to fully buy it within the 48 hr rental period.
Like the 4K upgrade, this would go a long way towards removing a key piece of friction in the ownership model. With analog media, there was no practical way of allowing a customer to get credit for a previous transaction involving a specific piece of media. Whether you saw a movie in theaters or rented it from Blockbuster it didn’t matter because when you decided you wanted to own it for yourself, you were back at square one—you’d pay exactly as much as someone who’d never seen it before. In digital media, especially with end-to-end buying and playback systems like iTunes, this is now relatively trivial. Giving credit for previous transactions would go a long way towards cementing the relationship between the creators of media and the consumers of media, and in the case of this suggestion I can only imagine that it would spur more purchases and generate more revenue.
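To make the arithmetic of that suggestion concrete, here is a minimal sketch in Python of how such a top-up might be computed. Everything in it (the function name, the sample prices, and the way the forty-eight-hour window is checked) is a hypothetical illustration, not anything Apple actually offers.

```python
from datetime import datetime, timedelta
from typing import Optional

RENTAL_WINDOW = timedelta(hours=48)  # the rental period Apple extended to 48 hours

def upgrade_price(rental_price: float, purchase_price: float,
                  rented_at: datetime, now: datetime) -> Optional[float]:
    """Return the top-up owed to convert a rental into a purchase,
    or None if the rental window has already lapsed."""
    if now - rented_at > RENTAL_WINDOW:
        return None  # the offer would expire along with the rental itself
    return round(max(purchase_price - rental_price, 0.0), 2)

# Example: a $5.99 rental against a $19.99 purchase leaves $14.00 to pay,
# as long as the upgrade happens within forty-eight hours of renting.
print(upgrade_price(5.99, 19.99,
                    rented_at=datetime(2017, 11, 4, 20, 0),
                    now=datetime(2017, 11, 5, 9, 0)))
```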
Last month while in Las Vegas for Adobe’s annual MAX conference, I moderated a panel on design entrepreneurship. The notion that designer co-founders can provide a competitive edge to major new companies is roughly half a decade old now, following the breakout successes of companies like Airbnb, Kickstarter and Pinterest.
The premise of this panel was to check in on that idea, to see how much has really changed for the designer co-founder in that time. The best way to do that, I thought, would be to talk to the designer co-founders who are building new, up-and-coming companies today.
To that end, I was lucky enough to be able to recruit three superb entrepreneurs: Joey Cofone, designer, CEO and co-founder of Baron Fig; Tiffany Chu, designer and co-founder of Remix; and Tricia Choi, designer and co-founder of MoveWith. To round out the panel with perspective from the investment community, I also invited Enrique Allen, co-director of Designer Fund, to join us. It was a fascinating discussion, especially when we debated the topic of whether a designer co-founder is likely to build a fundamentally different kind of business from other kinds of entrepreneurs.
The panel was live streamed on AdobeLive and Enrique wrote a great post on it that you can read over at designerfund.com. You can watch the replay above, and also peruse AdobeLive’s archive of fantastic video content at be.net/live.
For years we’ve groaned every time a character in a movie commands a computer to “Enhance!” a low-resolution image, and then watched as an implausibly clear, high-resolution replacement appears before our eyes. For me, this has always been one of the worst kinds of lazy storytelling; it suggests a fundamental lack of understanding of how digital imaging works on the part of the moviemakers.
Well, as it turns out, the work of scientists over at the Max Planck Institute for Intelligent Systems in Germany may ultimately give technologically clueless film directors from the 1980s and 1990s the last laugh.
The picture is downsampled, reducing the data to this pixelated state:
That image is then processed with their “ENet-PAT” method and results in this:
I’m not sure this approach can resolve a grainy image of a face into something instantly recognizable, which is usually what one sees in films, but this example is stunningly effective nevertheless.
To give some context, the resampling techniques most of us are familiar with from Photoshop and other image editors generate new pixels and details strictly from what’s available in a given low-resolution image. That typically yields unconvincing results that are either overly smooth or pock-marked with unsightly image artifacts. By contrast, ENet-PAT uses machine learning techniques to teach a neural network how to best guess what details should be added to an image. Once trained, the system can produce reliably believable results like these. Scary.
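For contrast, here is a minimal sketch of that classical approach using Pillow: shrink an image, then blow it back up with bicubic interpolation. The file name and downsampling factor are placeholders, and ENet-PAT itself (a trained neural network) is of course not reproduced here; the point is only that plain interpolation can’t recover detail that the downsample threw away.

```python
from PIL import Image

# Classical resampling for comparison: heavily downsample a photo, then scale
# it back up with bicubic interpolation. The result is smooth but detail-free,
# because interpolation only works from the pixels that survived the shrink.
original = Image.open("face.jpg")          # placeholder sample photo
width, height = original.size

small = original.resize((width // 8, height // 8), Image.BICUBIC)
upscaled = small.resize((width, height), Image.BICUBIC)

upscaled.save("face_bicubic.jpg")
# A learned super-resolution method like ENet-PAT is instead trained on many
# low/high-resolution image pairs and synthesizes plausible new detail rather
# than merely smoothing what's left.
```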
The only movie I saw in theaters in October was Denis Villeneuve’s “Blade Runner 2049.” Apparently, a lot of people did not like this weirdly brainy blockbuster, as its poor performance at the box office may result in as much as a US$80 million loss to producers. Ouch.
“2049” is certainly not without its problems, I’ll admit. And yet I still found it almost entirely satisfying, a worthy sequel to Ridley Scott’s original which of course was also a tepidly received financial disaster. That one just happened to go on to become one of the most beloved and influential science fiction films of all time. In fact, I re-watched the so-called “Final Cut” of the 1982 original in preparation for the sequel and was astounded by how much more I liked it than the last time I watched it—I’ve probably seen it five times by now. And, if I’m honest, I enjoyed it substantially more than the very first time I saw “Blade Runner” in the 1980s, on home video. It has never been hard to appreciate the beauty of the original but the totality of Scott’s aesthetic vision, while undeniably impressive, was so well executed that it invited suspicion. A film that gorgeous couldn’t also be good, could it? Turns out, the answer is yes.
Villeneuve’s sequel is also an almost indescribably gorgeous piece of filmmaking, thanks in no small part to the contributions of its cinematographer, living legend Roger Deakins. (It should be noted that Deakins is one of the very best visual artists working today, in any medium.) Beyond that though, I was as impressed as ever with Villeneuve’s idiosyncratic staging and pacing, and the unique way he is able to coax candidly off-kilter performances from his cast. He could make a short film about something as mundane as a mailman on his daily route and it would be something nobody has ever seen before, if not an artistic revelation. There is so much good stuff in “Blade Runner 2049” that I can’t imagine it escaping the same fate as the original: initial failure followed later by widespread acclaim. Then again, predicting the future is a great way to be wrong, which is how most sci-fi films end up.
I also saw twelve other movies in October. Here they are:
“Obit.” A wonderful look at death. See more of my thoughts in this post.
When people ask me why I joined Adobe, my answer is simple: there are things that I get to work on here that I would never get to work on elsewhere. Here’s one of them: Adobe is putting a major new emphasis on positively impacting diversity and inclusion in the creative industry, starting with a significant report that we released just last week. I’m very proud of having worked on this effort from its inception earlier this year. You can read the report below and learn more at this site, and also read an article about it over at AdWeek.
Why are we doing this? Adobe is a tech company but we are in the creativity business, and that means we take as our primary concerns certain things that other companies can’t afford to foreground. There are many businesses for whom creativity in its many forms is important, maybe even critical, but Adobe is the only multibillion-dollar company out there that is expressly interested in the core problems that creative professionals like me—designers, illustrators, photographers, filmmakers, artists of all kinds—encounter every day in making our work.
More than just our historical focus on tools though, we think a lot about what makes creative professionals successful. Our company’s unique dichotomy of tech and creativity gives us a different perspective on the issue of who gets to do creative work professionally. To be sure, there are many companies undertaking meaningful initiatives to create more diverse and inclusive workforces, but these efforts almost always tend to be seen through a tech lens.
While there is of course a significant overlap between tech and design, not all designers or design teams can be said to be part of the tech industry. This is even more true for illustrators, photographers, filmmakers and artists. You don’t have to Google very extensively to find research and discussion on diversity in tech (which is a good thing), but there’s not much out there that looks penetratingly at diversity and inclusion in creative fields.
Adobe is aiming to change that. Creative organizations have unique challenges when it comes to these issues, and as an industry it’s important that we understand the nature of these challenges as part of our own experience, something that we ourselves can impact, and not as something that gets rolled up alongside other disciplines like engineering, product management, sales, etc.
Maybe one of the biggest misconceptions about how the creative industry works is that, because our crafts are in many ways premised on unconventional thinking, we are already a diverse industry. In tech companies, the design team is often the most diverse group in the org. Our study does in fact show that a vast majority of us believe in the benefits of diversity and inclusion, believe that it makes our work and our industry better. I’ve been in this field for a long time and I’d wager that almost everyone I’ve ever met professionally would agree with this. That’s the good news.
On the other hand, over the course of my career I’ve worked with only a small handful of creative directors who were women, and with vanishingly few designers of any level who were of African American, Hispanic, or Native American backgrounds. In my experience at design conferences and events all over the world, the audiences are overwhelmingly white and the speaker roll calls only marginally less so.
These experiences are reflected in the report that we did. It’s based on a survey of a sample group of seven hundred and fifty creative pros as well as a series of qualitative, in-depth interviews with people like Ian Spalter, head of design at Instagram, Gina Grillo, CEO of the Ad Club, and Jacinda Walker, chair of AIGA’s diversity task force.
What you’ll see is that the numbers are stark among women on whether the leadership of their design organizations is diverse, and on whether their gender will hamper their career growth. Creative professionals of color are significantly less likely than their white peers to feel that their contributions are valued, and far fewer minorities than whites graduate from university programs in creative fields. Maybe most tellingly, only half of those surveyed, regardless of gender or ethnicity, believe that the industry as a whole has made sufficient progress in becoming more diverse and inclusive over the past half decade.
For my part, this has been a passion project on which I’ve been very fortunate to be able to spend a significant portion of my past nine months at Adobe. Like many of us in the design field, I have in the past been guilty of underestimating our industry’s diversity and inclusion challenges. When I was starting out in design, to some extent this issue felt like it was a solved problem, or one that was just on the edge of resolving itself.
As an industry, we’ve spent most of the past two decades arguing for a seat at the table for any designer, without thinking deeply enough about who among us, gender- and ethnicity-wise, gets a chance at that seat. And of course my personal experience is made more complicated by the fact that I’m an immigrant and an Asian American—in the broader sense of American culture I’m a minority but within the design industry there are any number of Asian Americans on teams everywhere. Diversity is a richly complex subject and there are no easy answers.
That’s why I feel so fortunate to be a part of this team at Adobe who are similarly motivated to move the needle on this issue, regardless of its scale and complexity. This report is just the first step—a baby step. Its intent is to raise the volume on this conversation, to put some facts and figures and quotes out there on the specific intersection of professional creativity and diversity and inclusion. Going into 2018, you’ll see Adobe build on this further with commitments both internally, within our company and products, and externally, to the design and creativity communities at large—we’ll be both recalibrating current initiatives for even more emphasis on these issues as well as rolling out new ones. We hope to do some good, and that’s why I work here.
Apple’s new iPhone X was released just this past Friday and you can read any number of reviews of it right now—my favorites are from The Verge, The Wall Street Journal, The New York Times, and Six Colors. I was lucky enough to get an iPhone X too and you can read some of my thoughts on the device a little further down.
However, I’ve come to believe that there’s at least one thing wrong with this whole notion of product reviews—and with smartphone reviews in particular—and that’s that, by and large, they’re only ever interested in these phones when they’re brand new.
When an iPhone debuts it’s literally at the very peak of its powers. All the software that it runs has been optimized for that particular model, and as a result everything seems to run incredibly smoothly.
As time goes on though, as newer versions of the operating system roll out, as there are more and more demands put on the phone, it inevitably gets slower and less performant. A case in point: I’m upgrading to this iPhone X from a three-year-old iPhone 6 Plus, and for at least the last year, and especially over the last three months, it has struggled mightily to perform simple tasks like launching the camera, fetching email, even basic typing. People who have recently had the misfortune of having to use my phone tell me almost instantly, “Your phone sucks.”
You could argue that three years is an unrealistically long time to expect a smartphone to be able to keep up with the rapidly changing—and almost exponentially increasing—demands that we as users put on these devices. Personally, I would argue the opposite, that these things should be built to last at least three years, if for no other reason than as a society we shouldn’t be throwing these devices away so quickly.
But even if you disagree with me, even if you’re the kind of person who upgrades to a new phone every year, I think you’d still agree that it would be useful to know how well these devices hold up after one or even two years.
Now, I know it sounds kind of counter-intuitive to read a review of a product a year or more after everyone who would consider buying it has already bought it. But imagine if the sites and publications that review these products did make it a habit to revisit them down the road. Imagine if twelve months from now you could read about how well today’s iPhone X holds up with iOS 12, and also with whatever slate of third-party apps can reasonably be understood as essential—the 2018 versions of Instagram, Spotify, Twitter or whatever. Imagine if at regular intervals we could see benchmarks on a freshly restored iPhone X running the latest software and get a quantified and qualified idea of how well that piece of hardware has aged over time.
If reviewers revisited these products in this way, it would give us a whole new dimension of understanding. It would tell us how well-designed these phones really are, whether the manufacturers really understand how technology—and the world—changes within a two or three year time frame. And it would help us judge for ourselves how much effort the companies are investing into ensuring the quality of their products over the lifetime in which they’re used. Basically, it would give us, as customers, a richer track record for these companies, so that we can hold them accountable in a way that tends to go unnoticed today. These devices are maybe the most important pieces of technology that we own and every time we’re enticed to buy new ones we are promised world-changing features and performance. It strikes me that it’s reasonable to examine how well those claims hold up over time.
All that said, here are my thoughts on this new iPhone.
It’s a triumph, and I don’t think that I’m saying that just because the three-year-old state of my iPhone 6 Plus has been so painful to bear lately. Overall, the iPhone X feels better conceived, designed, and executed than any previous model since the iPhone 5.
The standout feature is of course Face ID, which I’ve found to be very slick and very well done. Unlike some other Apple innovations, like Touch ID, which had its problems early on, and Siri, which continues to be problematic, Face ID feels mature and fully baked. It’s not one of those new technologies that mostly works but sometimes struggles; it works virtually all the time, and it’s super fast (though check back in a year or two). On the rare occasions Face ID doesn’t work, it’s understandable, e.g., it doesn’t seem to recognize me when I’m in bed, with all the other lights out and with my glasses off. Anyway, I’m very, very impressed with Face ID.
Being able to set up the iPhone X by merely placing it near my old phone was pretty cool. I’d done it before with my Android devices, but I really appreciated the way Apple uses this to help me set up my Apple ID on my new device.
However, I did hit a roadbump in replacing my previous device with this new one: my iPhone 6 Plus was already updated to iOS 11.0.3, so the backup was too new for the iPhone X, which was only on iOS 11.0.1. That resulted in a misleadingly alarming error message that suggested my backup might be corrupt. To get around this, I had to set up the X as a new device and then upgrade to iOS 11.1. Not too difficult, but time-consuming and higher friction than I think Apple should be okay with. After consulting Twitter, I found that lots of people had that same problem.
Apple’s design team did a very nice job making tweaks to the UI to make iOS 11 more consonant with the unique details of the X’s hardware. One example is that on the iPhone X, iOS 11’s “cards” are rounder to be more harmonious with the rounded corners of the X’s screen. Lots of nice touches like this throughout.
I think I miss the Home button a little, but I do like the new Home affordance which shows up at the bottom of the screen as a little bar to encourage you to pull on it. However, it tends to look like a progress bar that’s just not showing any progress. I think this needs to be redesigned.
The notch is a nuisance for sure. It’s not elegant. It also forces some downstream usability problems. An example: when I connect to my office’s VPN, the indicator that I’m on the VPN is hidden in the top-right area “behind” the cellular, Wi-Fi, and battery indicators. After using the VPN for a while, I forgot that I was still connected because the indicator was not visible. That’s not good.
The physical size of this model is a major improvement over the Plus size of Apple’s previous models. I had really come to dislike how large and unwieldy my iPhone 6 Plus was, and I’m incredibly happy that this new model gives me basically the same screen real estate in a much more easily held physical frame.
I’ve done a reasonable amount of public speaking in my career but until this year’s Adobe MAX conference, where I appeared onstage during the keynote to present the official release of Adobe XD 1.0, I had never given a public software demo before. It’s true that a demo is not entirely dissimilar from giving a talk or lecture—many of the same presentation skills apply to both—but I discovered that in many ways demos are an entirely different kind of beast. So I thought I would write up some of my personal observations on the whole experience for those who are curious or who might one day have an opportunity to do something similar.
Like many folks working in tech I’ve demoed software countless times in the past but almost always in private settings—for colleagues, customers, investors, or to small groups of users. What makes an occasion like the Adobe MAX keynote unlike those prior experiences is its scale. There were 11,000 attendees in the Las Vegas auditorium where it was held, plus an overflow room and untold more viewers of the live stream.
A live audience of that size demands that even a short, ten-minute demo like the one I gave needs to become something that barely resembles the “real life” version of itself that it’s meant to represent. Most people use software alone in an almost improvised manner—whatever it takes to get the job done—and almost never talk about their actions aloud. A keynote demo upends that solitary activity and transforms it into a public performance that’s heavily rehearsed and narrated live, in real time.
In fact the central thing I learned about these kinds of demos is that they require exhaustive practice. In the two months leading up to the MAX conference, I flew out to San Francisco every single week, leaving New York on Monday and returning on Thursday or Friday, all to rehearse my ten-minute demo again and again and again. In San Francisco, I’d spend nearly all day, every day practicing with the keynote’s production team and with the other keynote presenters. When I wasn’t officially in rehearsals, I was often running through my part on my own, usually in my hotel room but occasionally muttering it aloud walking around the Adobe offices. It was an all-consuming process.
You might be thinking to yourself, “That’s excessive,” which is an understandable reaction. But what I also learned is how absolutely necessary all that time was in developing the story of what I was going to be demoing. Even though there was relatively little debate about the aspects of Adobe XD that I would be presenting onstage, the actual narrative of those features really had to be developed through iterative, organic evolution. The version of the demo that I first began rehearsing back in late August was very different from the version that ended up onstage in mid-October, and it changed countless times in between.
Each of those many run-throughs was more than just a matter of learning or memorizing the content. The real value in doing it over and over, a dozen or two times a day, is that it allows you to make an endless number of incremental tweaks along the way—adding or subtracting a word or phrase here or there, trying out different sequences and emphases, learning how to communicate the message a tiny bit more clearly or succinctly.
There’s also the added complexity of the assets, or the sample design file, that forms the heart of the demo. Having a great looking project with which to show off an app’s capabilities makes all the difference. For various reasons, the sample file we started with had to be discarded, and so I spent a lot of time with one of our designers creating something entirely new, from scratch. He’s based in Germany, which is six hours ahead of New York and nine hours ahead of San Francisco, which of course exacerbated the interminable jet lag that I was already experiencing from all my back-and-forth travel. It was a very strange period of my life.
Choreography too is a part of this preparation. Not as in footwork (though I did have to practice the actual route I took as I walked up to the podium) but rather the choreography of what happens on screen. It was much trickier than I had anticipated to synchronize the words that I spoke with the motions of the cursor so that neither came too early nor too late. It got to the point where, by the end of the process, I was moving my mouse and clicking on buttons almost precisely along the same path and with nearly exactly the same timing on each run-through.
All that said, a lot of that preparatory time was frankly devoted to just getting me used to the concept of demoing. How one figures out the tricky balance between telling the audience and showing the audience is kind of a personal journey. Demoing is neither clearly the former nor the latter, and getting comfortable with that ambiguity just takes time, practice and some measure of self-reflection, too. You need to get right with whatever personal ambitions you have for how you want to come across on stage, and to reconcile that with whatever the goals of the demo are and with the feedback that comes from the rehearsal process. It was only through lots of trial and error and many periods of wondering whether I was even temperamentally suited for it at all that I was finally able to figure out “how to demo.” I’m sure there are people for whom this comes naturally but for me, I had to learn it the hard way.
Finally, don’t disregard the outside possibility of totally random chaos, too. You can prepare all you want, but there’s always the chance that something completely unexpected could entirely derail the actual demo once you go on stage. And, of course, that happened to me.
After all that rehearsal and practice, after internalizing every movement and detail of the setup of my screen, I walked on stage ready to do my thing and what happens? My trackpad died. It just wouldn’t work, stopping my demo cold in its tracks while a member of the stage crew had to rush another one out for me. You can see it in the video I posted above—not more than a minute into it, everything breaks down. People tell me that I managed to hold it together, but sometimes panic manages to disguise itself as composure.
It’s worth repeating that the inherent tension of a keynote demo is that it takes something you normally do alone—use software—and turns it into a highly public performance. You’re looking down at your keyboard and device most of the time but you’re also trying to augment every click and every word you utter to draw the live audience into it.
And even though every mouse movement is projected at huge scale for the audience to see, you still need to use carefully chosen commentary, expressive body language and judicious pacing to form a narrative, to give it all shape and purpose. The whole thing is both highly staged and strangely improvisational; it’s exacting and methodical yet it’s meant to come across as breezy and casual. It’s a monologue in that only the presenter is talking during those ten minutes, but it’s also a conversation with the audience in that anyone watching is going to be silently asking questions that should be answered in short order by what’s demonstrated on stage. It’s one of the weirdest things I’ve ever been through but it was also a lot of fun.
Helvetica, much as I adore it, has had more than its fair share of attention. That’s why I’m so happy to see this new book by designer, writer, and historian Douglas Thomas all about the typeface Futura, which, it’s worth noting, predated Helvetica by three full decades—and it looks as beautiful and timely as ever.
Sporting the playfully provocative title “Never Use Futura,” Thomas’s book is a cultural biography of a typeface, starting with how Futura began life as a byproduct of Bauhaus ideals, tracing its evolution over many years and countless uses into a nearly invisible “go-to choice for corporate work, logos, motion pictures, and advertisements,” and touching on the current vogue of newer, Futura-derived geometric typefaces. You might expect this book, like most books about design, to be largely illustrative and light on the prose, but a look at the handsomely designed pages promises worthwhile reading.
Before I started working for Adobe, I never understood the scale of the company’s annual Adobe MAX user conference. This year’s installment, starting tomorrow and running through Friday in Las Vegas, will welcome 12,000 attendees, up twenty percent over last year—that’s a huge chunk of the creative community. Tomorrow morning I’ll be part of the day one keynote address, where we’ll be talking about lots of new stuff we’ve been working on—including new tools and features for designers. You can stream it live at max.adobe.com.