Photo Portraits of Techies

Techies

Former techie turned freelance photographer Helena Price is working on a brilliant project called “Techies,” in which she takes photographic portraits of those “who tend to be underrepresented in the greater tech narrative.”

This includes (but is not limited to) women, people of color, folks over 50, LGBT, working parents, disabled, etc.

The project has two main goals: to show the outside world a more comprehensive picture of people who work in tech, and to bring a bit of attention to folks in the industry whose stories have never been heard, considered or celebrated. We believe storytelling is a powerful tool for social impact and positive change.

See the portraits at techiesproject.com.

+

A Brief History of Lens Flare

This video essay from Vox is a nicely done overview of how lens flare became such a prevalent visual style in contemporary film. As the video explains, it began in classic Hollywood cinema as a mistake; the presence of lens flare in film was originally something to be avoided and, when it showed up, corrected. Later, in the 1960s, as a new generation of filmmakers sought to reinvent the movies, lens flare became a signifier of authenticity, evidence that the new cinema was being made outside of sound stages and traditional production methods. Now it’s become an often abused stylistic affectation from directors subsisting on a dearth of original ideas. This transition also speaks to the way that technology has progressively become a more and more prominent element of art—lens flare is literally the intrusion of technology on artifice. Fascinating stuff.

+

Faking Food Photography

“Faking It” by Sandy Suffield and Dan Matthews

Art director Sandy Suffield and photographer Dan Matthews reveal some of the tricks that go into making food photography look preternaturally appealing. Each of the images in this collaboration between the two shows a beautifully composed food subject—a hunk of meat, a bowl of ice cream, a plate of cookies—next to some of the unexpected tools that went into styling it for the camera. Items include glue, tape, bubble wrap, mechanical oil, hair spray, paint and more.

See the full project at sandysuffield.com.

+

Interview with Tom Krcha, Adobe XD

Adobe XD User Interface

A few weeks ago Adobe released the first public preview of what the company is now calling Adobe XD, its major new UX/UI design and prototyping tool—you can download it here. This early version is missing several key features, and a long road to the full 1.0 launch lies ahead. Nevertheless, the preview is workable enough that professional designers can begin to see firsthand whether the app can deliver on the innovation that has been such an integral part of its promise. This milestone seemed like the right moment for me to interview Tom Krcha, a key member of the product team who has been with the project since the very beginning, before it was even called Project Comet. I asked him to reflect on how XD first came to life, what it took to get it to this release, and the long-term vision that is guiding him and the team.

Full disclosure: Since August of last year I have been an Adobe employee. As with all of my posts, this interview was not submitted to management for approval.

Khoi Vinh: Let’s start at the beginning. How did XD first come to be?

Tom Krcha: It started around mid-2014. I was working with the Behance team on some new apps, and we were prototyping new ideas all the time for them, using many different tools to do that. None of them seemed ideal. They lacked continuity: the ability to jump from an idea into a design and then mock up a quick prototype that could be easily shared seemed so obvious, but it was very distant from the reality back then. I wanted to speed up the way we iterate and communicate our ideas.

So I collected a bunch of the thoughts into a slide deck—a very quick mood board, nothing polished—and shared it with some people I knew around the company, just to see what others thought of it. Some of the ideas were around constraint-based adaptive layout, reusability, in-context editing, fast but precise vector UI and icon drawing and so on.

I quickly discovered that other folks at the company were thinking about similar things as well, and in fact there was a lot of passion for this topic. We all met together and soon after assembled a small team of people, like a startup, with a mission to explore and rethink the current UI/UX design workflow—as if we could imagine anything and start with a blank canvas, with no limits. The whole process was really exciting.

I want to hear more about that process but first: by mid-2014 Sketch had already gained tons of traction, and the market for design tools had become quite robust. I have to imagine you and your colleagues were watching the market, right?

It’s true that there were a lot of design tools out there. But when we stepped back and took a broader view of the market, it seemed like there were a lot of opportunities to rethink traditional tooling. Demand for mobile apps had exploded. App design had matured and become more functional, and had moved towards flat design, which I think was an important break point. Designers started to think more about products and less about graphics. And motion and interaction started to play a much bigger role in app design. It was clear that we had entered a new era of designing products.

However, we knew that building a new tool would take some time. It just doesn’t happen overnight. So I think everyone was less worried about what was happening at that moment and more focused on thinking further ahead. The thinking was to leapfrog the current generation of tools and jump to the future. To build an electric car of design tooling.

So do you believe most of the UX and UI design tools that debuted over the past two or three years are too focused on the “today” of the craft, and not enough on the tomorrow?

Yes. There is so much more that designers would love to have to simplify and speed up their process. Designing at the speed of thought is where we are all heading.

Getting to “speed of thought” tools requires negotiating the tradeoffs between ease of use and high fidelity—a WYSIWYG interface versus a code-intensive interface, is that right? How did you strike the right balance for XD?

That’s right; high fidelity and ease of use often go against each other. When we first started working on XD, we knew that we wanted to build a tool for any designer to pick up and start using right away, and to be able to use day to day. We decided to squarely focus on the design side and not on the development side of things, and also to stay platform agnostic. We talked to many designers—many advanced designers, but also emerging designers who haven’t necessarily adopted any given tools and who are just starting to look around, which is great, because they are less anchored to specific solutions. We learned that no matter what skill level they are, they want to start quickly and move fast. One of our core principles has been “comfort first,” meaning that the tool shouldn’t get in your way; it should be very straightforward. It should almost feel invisible, performance included.

What are some examples of those “comfort first” decisions you made?

There are many. The contextual property inspector shows you just what you need when you need it. We have recent files and UI kits available on the welcome screen for immediate use, so you don’t have to hunt for them. One of my favorites is ghosting. Whenever you have an object, let’s say a photo, that gets clipped by an artboard, we display the clipped part ghosted with opacity to help designers work easily with the full object context. This applies to Boolean operations, and soon to masking as well. Another example is the distance decorations/guides, where we combined snaplines and distance measurements to minimize distractions. Some of these things are subtle, but when you experience them they feel so obvious.

If I’m a new user, what should I expect from the first time (or first few times) I use Adobe XD? Will that comfort be obvious to me right away, or is there a learning curve?

We tried to minimize the need for learning. The basics, such as drawing and layout, should feel familiar from first launch. There are definitely features that users need to learn about, but they will feel natural after using them one or two times. For instance you can drag an image from your computer directly into a shape in order to mask it—no need to actually tell the app to mask the image to the shape. It’s such a logical thing to do but I haven’t seen other tools do this.

We’re coming up with a set of heuristics like this that will make sense to everyone. One example is when you duplicate an object multiple times: we know that, and we can show a contextual hint that says “[⌘R] Turn into Repeat Grid,” which can take the already duplicated objects and quickly create a repeat grid for you. Another place where this comes in handy is path editing, since there are many operations you can perform on points with different key modifiers or gestures.

We’re still building the proper onboarding experience, and that’s a big part of the learning. We know that many users won’t read or watch long tutorials, but maybe there are more contextual methods of helping them learn, right within the tool. So you’ll see contextual hints that provide just enough guidance by showing you shortcuts for commands related to the currently selected object. Of course, this will only be useful if it’s valuable enough and subtle enough that it doesn’t interrupt the design process.

How much are you finding that the XD beta testers are struggling with the biases and preconceptions that they might bring with them from other tools?

Ha, yes. There is definitely one that I am constantly fighting with: the zoom tool. Our zoom has this Mac or iOS native feel: pinch to zoom to a specific area on the trackpad, or Option-scroll on the mouse. It’s a buttery smooth zoom and I’m sure users will love it.

However, from the feedback we’ve learned that users are struggling to find the actual legacy zoom tool—the rectangle/marquee zoom. I definitely see a use case for that but pinch, in my opinion, has a much more natural feel. We’ll eventually support all the use cases, but it’s one of those things that I wish we could just skip.

In general though, how open to change are you finding the beta users?

I think a lot of that goes back to onboarding, actually. If the intention comes across clearly then it’s easy. However, we sometimes get a lot of feedback on certain things. That’s actually great. It’s exactly why we decided to start a dialog with users early on. First, to really see whether certain ideas are just crazy, or cool but really edge cases, or whether they will resonate well and speed up workflows significantly. Sure, you still have to trust your expertise and gut when making decisions, but having usage data and qualitative research helps a lot to settle on a decision. Either way, innovation is hard, especially when you are fighting expectations that are often not clearly articulated—because “it used to always be like that.” I think we can do so much better in areas such as symbols, styles and layers and not just take what’s out there.

It’s interesting that you’ve built the layout tool simultaneously with the prototyping tool. In many ways they’re very different, but has the parallel development brought you insights you wouldn’t have had otherwise?

It led us to think about the round-trip between design and prototyping as the backbone of modern experience designer workflows. It opened many technological questions about rendering, interactivity and animation on all the platforms for sure. At the same time many of our team members have worked on animation tools and gaming engines previously and that past experience is very helpful as we explore the future features that make both workflows feel even more connected.

That raises another question: how much is XD intended to be used end-to-end, and how much is it meant to complement existing apps? Could I take a UI layout from XD and put it in InVision, or could I take a UI layout from Illustrator and put it into XD? How did you formulate your philosophy on that?

XD is flexible and you can really do all that: import vector and bitmap assets into the app, export them out and reuse them in another tool. We’re working on tighter integration with Photoshop and Illustrator, and we will provide an extensibility layer for any tool to integrate.

We see XD as fitting in the center of the workflow. You can start in the tool or bring assets in and stay until perhaps you need advanced prototyping or custom behaviors. You can think of XD as a communication tool, to get a design vision across. We plan to provide as much as needed to cover the major areas of design and prototyping, while keeping the tool simple and fast to use.

It’s a challenge, since providing some advanced features would require us to take steps that would make XD a lot more complex and that would be in conflict with our principles. Examples might be adding a full timeline, code editor or photo retouching features. We think there’s a balance, where Adobe XD covers just the right amount of UX design workflow within itself, then enables other tools like Photoshop or Framer Studio and similar prototyping tools to extend the workflows where needed on both sides.

I’m glad to hear you say that, because I tend to believe that designers will be using a pretty eclectic set of tools for the foreseeable future—that we’ll always need to mix and match, to some extent. Are there opportunities for interoperability between your product and the wide variety of other tools out there, both from Adobe and from others? Is there going to be a standard prototyping file format, the way we have SVG or even to some extent PSD?

There are multiple options to enable this interoperability. A file format might help in certain cases, but really the extensibility APIs could return exactly what the tool needs, such as a rendition of a specific element or a structure. This is something we are currently investigating, and more details will follow. We want to enable these workflows to make sure designers can fit XD in the best place in their workflow and take their designs to other tools if needed for something very specific.

Let’s talk about bringing XD to life then. What did you use to prototype what you were building?

We built a complete functional prototype with HTML and JavaScript and wrapped it inside a native app chrome so it looked just like a real design tool. It had full drawing capabilities and all sorts of new, ambitious features that we were testing, many of which aren’t in the shipping product yet. Today we can already actually prototype certain things for XD in XD, which feels really satisfying, but of course we also still prototype in actual code for more complex interactions.

And where did you start building? What was first?

We first looked at areas where we could significantly cut the time to get something done. Repeat Grid and masking by dropping an image into a shape are two examples of that. But we also de-constructed past patterns to see if we could put them back together in a better, more approachable way. Sometimes we landed close to where we began, which is fine—that’s good validation that changing the essentials can actually have the opposite effect and slow users down. Other times we found a new or an improved way.

Research wise, what kind of outreach did you do with working designers?

Initially we invited a bunch of designers of all skill levels and backgrounds into Adobe and had them play with the prototype. It was a great way to build empathy, observe, validate and narrow down the scope of our future work—and really fail fast and move on. When you’re prototyping, the world seems limitless. It’s when you start working on the real thing that you realize how crisp you have to be. We learned that there is a big difference between “cool” and “useful on a daily basis.”

These days we have a few Slack user groups including one dedicated to a customer advisory board, where we discuss new features in real time and try to shape ideas with a few dozen select real world users. But anyone can actually suggest features and vote on them or file a bug using our UserVoice. Our internal design teams are also providing a constant stream of feedback.

How did that turn into a real project and a dedicated product team?

It was through a series of iterations as the project evolved. The prototype, in combination with market research, industry trends and feedback from users, helped build excitement and early support to move on. But as soon as we started thinking about it as a real product, we put the prototype aside and did a bunch of research to hit the performance and quality bar that we knew was going to be critical for this tool. This led us to a new native codebase built from scratch. The turning point came when we felt the canvas performance played well together with the new UI. All those things got us support from within the company to take it from zero to one.

How did you get executive backing for the project? UX and UI design is a niche that Adobe has in the past been somewhat lukewarm on.

As it became clear that there was a really important opportunity here, we started to advocate for funding it as a real project, not just an experiment. So it was really a bottom-up effort, like a startup trying to get VC funding.

I don’t think there was one big green light though. As we talked to executives, they were interested but kept challenging us to keep proving that it all makes sense, to keep iterating and improving the vision.

We were also looking at the fact that there was a sudden uptick in prototyping solutions that were starting to enter the market. In fact, even with all these new tools coming out, we found that a lot of designers were using PowerPoint and Keynote—tools that weren’t even built for prototyping—to express their UX designs. That suggested to us that there was a great unmet need in the prototyping space—but we didn’t see anyone trying to put prototyping together with visual design.

Okay so the preview is out. What do you expect to happen between now and the official release of 1.0?

We’re going to be busy. While the horizontal workflow is in place (design, prototype, collaboration), we’ll now go deeper in all those verticals. We’ll enhance the design features for higher fidelity, add new effects including background blur, enable authoring scrollable content and microinteractions, provide iOS and Android companion apps for real-time previewing, help designers manage bigger documents as their designs scale with symbols, styles and Creative Cloud libraries, add extensibility APIs for custom plugins and integration with other tools, and build a Windows 10 version. As we go we’ll fine-tune the product by responding to the feedback we get. We will also keep working on visions for future releases.

Looking further ahead, what is your vision for Adobe XD in, say, three years, or five, or ten?

Our mission is to build a tool designers love. Going forward, we think that your designs should be everywhere with you, so you can review them at any time and share with other people. The internet also enables new ways of live collaboration that were not possible before. We think both of these areas should be part of every designer’s workflow and we are planning to build a system of connected apps for desktop, mobile and cloud that enable collaboration between designers, developers and stakeholders wherever they are.

There is also a range of trends that we see coming into design tools, especially around adaptive layout to help with scaling designs to multiple resolutions, or quickly populating a layout with sample data and designing with real data (some of which we showed last year). I also think there is still space to improve the designer/developer workflow—this is challenging for numerous reasons, but perhaps semantics in design could help.

Maybe in the more distant future, designers will just provide requirements and inputs on what they want and a simple artificial intelligence (or a smart algorithm) will assist in remixing variations of the designs based on the current design trends and other parameters. You can imagine that it could also scale designs to other form factors. I can foresee algorithms that could help with brainstorming, moodboarding and gathering inspiration quickly, while applying chosen patterns to a design that’s being worked on. I think it all comes back to designing at the speed of thought.

+

Watch First-run Movies at Home and/or Destroy Movie Theaters

This article at Slate is as good a summary as any of the controversy currently brewing over The Screening Room, a plan to disrupt the movie distribution system led by veteran entrepreneur Sean Parker. After buying a US$150 set-top box, consumers can rent first-run, major movies to watch in their homes for forty-eight hours. If you remember a time when renting DVDs cost only a few dollars, you might be shocked at the US$50 rental fee for these releases, but in theory these are “blockbuster” grade movies—imagine being able to experience the critically drubbed “Batman v Superman” in the comfort of your own home on opening night! The pricing starts to feel even more reasonable once you factor in the hassle of getting to the theater, the inflationary concessions menu, the cost of a babysitter (for some of us), and the ability to invite your friends over to split the fee with you.

Parker has lined up lots of big names behind this plan, including Martin Scorsese, J.J. Abrams, Steven Spielberg, Ron Howard, Brian Grazer, and Peter Jackson, but there are big names coming out against it too, like James Cameron and, unsurprisingly, theater owners nationwide. My own feelings about this are quite conflicted. It’s hard for my wife and me to make it out to the movies these days, and being able to stream newly released movies on demand sounds like it would be a tremendous convenience. On the other hand, going to the movies is a wonderful experience; even if a given cineplex is nothing to write home about, the exclusivity of a new movie gets us out of the house and into the world alongside other human beings, which is something to be appreciated. As much as I want to be able to watch new releases at home, The Screening Room would almost certainly decimate the theater business, making a world in which there is no theater option a real possibility. I’m not sure I want that to happen, though in the long run, it may be an inevitability.

+

Batman vs. Superman

I think this illustration by Diego Patiño for The New Yorker’s review of Zack Snyder’s new “Batman v Superman: Dawn of Justice” is terrific:

“Batman vs. Superman” by Diego Patiño

The review itself, written by the notoriously dismissive staff reviewer Anthony Lane, is actually not as savagely critical as most (though he hardly recommends the film). As of today, the movie is scoring an abysmal 29% on Rotten Tomatoes. (It did gangbusters at the box office though.) I was initially somewhat intrigued by it, but with these reviews, I really wonder if I have the stamina.

It’s shocking to me how incompetent Warner Bros. has been at handling the Superman franchise for the past several decades, and now they’ve dragged the Batman franchise into the same mess, as if misery would benefit from company. This is the golden age of comic book movies, and the company has at least three of the most iconic comic book characters in history to work with… and the best they can do is Zack Snyder. What a colossal waste not just of capital and human resources, but also of the collective dreams of millions of fans.

+

Walker Art Center Interviews Google Design

Cover for Google SPAN Reader

Walker art director Emmet Byrne interviews Rob Giampietro and Amber Bravo, part of the team responsible for Material Design. Their comments on that design language are worth reading:

Material Design is an open-source product and we treat it as such with regular updates and improvements that we share widely. On our team, designers and engineers work very closely together to build, and, perhaps even more crucially, maintain the system and services we develop. That’s a hallmark of our work at Google Design—the fact that we’re led by design and engineering in equal measure. We’ve created a unique platform for sharing our work and the work of other design teams across Google, but it’s always geared toward the perspective of a team of people who are excited to polish and push the boundaries of design and engineering. We mentioned our mission earlier: to support designers and developers both internally and externally to Google. So part of our editorial and educational imperative is to share Google’s process and thinking with the design world around important topics like design tools or identity systems, and, just as significantly, we want to listen, learn, and respond to what the design world is talking and thinking about and bring the best of those ideas back into the company to power it and make all of our work better. Google is a technology organization, but, increasingly, and especially with the formation of Google Design, it understands itself to be a cultural organization as well.

This conforms to what I find interesting about Material Design: I’m less impressed by Google’s construction of a comprehensive aesthetic (although it is quite attractive) than by Google’s development of an expansive design culture. Material maintains a very tricky balance between establishing dicta and engendering participation in a conversation with its users—designers and developers—about design. It’s not always a successful balance (by necessity it’s far more prescriptive than iOS’s design language, and the results are more uniform and less innovative) but it’s still remarkable for having created an ecosystem of independent practitioners who are invested in growing and evolving the system.

Read the full interview, which goes into depth on the remarkable book of design-related essays that the team produced for its recent SPAN design conferences, at blogs.walkerart.org.

+

Lingo by Noun Project

Lingo by Noun Project

The amazing, community-powered icon resource team at The Noun Project has released a new piece of software aimed at helping designers organize their visual assets. It’s called Lingo and it makes the case that traditional file- and folder-based hierarchy is a disservice to image assets. Indeed, the marketing draws a line in the sand with its tagline:

Files hide in folders. Visuals live in Lingo.

Designers can drag and drop their assets into Lingo’s thumbnail browser interface; the assets are synced to Lingo’s own cloud service and are then accessible across computers.

That Lingo is both expressly made for designers and smartly crafted is a big win, but it’s still only a first step toward making visual asset management easier. It’s more modern and more thoughtful, but still not all that dissimilar from the various other image asset management solutions that have been with us since computers and visual artists first started hanging out together. My main complaint is that Lingo still relies on rich keywords for icons to be findable via its search mechanism. You can search for “pencil,” for instance, and get any asset that’s been tagged with that keyword, but you can’t find an image that merely happens to have a pencil in the background. Neither can you search for black-and-white assets, or vector assets, unless someone has tagged them accordingly.
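To make that limitation concrete, here is a minimal sketch of how a purely keyword-based index behaves. (This is an illustration only—Lingo’s actual implementation isn’t public, and the asset names and tags below are hypothetical.) An asset is only ever findable under the tags someone explicitly gave it:

```python
from typing import Dict, List, Set


class KeywordIndex:
    """A toy tag-to-assets index, sketching keyword-only search."""

    def __init__(self) -> None:
        self._index: Dict[str, Set[str]] = {}

    def add(self, asset: str, tags: List[str]) -> None:
        # Each asset is registered under its explicit tags only.
        for tag in tags:
            self._index.setdefault(tag.lower(), set()).add(asset)

    def search(self, query: str) -> Set[str]:
        # Only explicitly tagged assets match; anything merely
        # *depicted* in an image is invisible to this lookup.
        return self._index.get(query.lower(), set())


index = KeywordIndex()
index.add("pencil-icon.svg", ["pencil", "stationery"])
# This photo contains a pencil in the background, but nobody tagged it:
index.add("desk-photo.jpg", ["desk", "office"])

print(index.search("pencil"))  # only the explicitly tagged asset
```

Searching for “pencil” here returns just the icon that carries the tag; the desk photo never surfaces, no matter what it actually depicts. Closing that gap is exactly the kind of job content-based (machine learning) search would have to do.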

This is a bit of an unfair line of criticism because Lingo is really a beautiful piece of work and sure to prove handy for lots of people. (Its ability to let you drag assets into popular design tools is particularly nice.) It’s more accurate to say that image management as a whole is still fairly primitive; the fact that we’re still relying on keywords to look for visual items and not on machine learning to do that job shows a huge gap between what’s possible in current technology and what has trickled down into the design tools space.

+