A few weeks ago Adobe released the first public preview of what the company is now calling Adobe XD, its major new UX/UI design and prototyping tool—you can download it here. This early version is missing several key features, and a long road to the full 1.0 launch lies ahead. Nevertheless, the preview is workable enough that professional designers can begin to see firsthand whether the app can deliver on the innovation that has been such an integral part of its promise. This milestone seemed like the right moment for me to interview Tom Krcha, a key member of the product team who has been with the project since the very beginning, before it was even called Project Comet. I asked him to reflect on how XD first came to life, what it took to get it to this release, and the long-term vision that is guiding him and the team.
Full disclosure: Since August of last year I have been an Adobe employee. As with all of my posts, this interview was not submitted to management for approval.
Khoi Vinh: Let’s start at the beginning. How did XD first come to be?
Tom Krcha: It started around mid-2014. I was working with the Behance team on some new apps, and we were prototyping new ideas for them all the time, using many different tools to do that. None of them seemed ideal. They lacked continuity: the ability to jump from an idea into a design and then mock up a quick prototype that could be easily shared. That seemed so obvious, but it was very distant from reality back then. I wanted to speed up the way we iterate and communicate our ideas.
So I collected a bunch of those thoughts into a slide deck—a very quick mood board, nothing polished—and shared it with some people I knew around the company, just to see what others thought of it. Some of the ideas were around constraint-based adaptive layout, reusability, in-context editing, fast but precise vector UI and icon drawing, and so on.
I quickly discovered that other folks at the company were thinking about similar things, and in fact there was a lot of passion for the topic. We all met, and soon after assembled a small team of people, like a startup, with a mission to explore and rethink the current UI/UX design workflow—as if we could imagine anything and start with a blank canvas, with no limits. The whole process was really exciting.
I want to hear more about that process but first: by mid-2014 Sketch had already gained tons of traction, and the market for design tools had become quite robust. I have to imagine you and your colleagues were watching the market, right?
It’s true that there were a lot of design tools out there. But when we stepped back and took a broader view of the market, it seemed like there were a lot of opportunities to rethink traditional tooling. Demand for mobile apps had exploded. App design had matured and become more functional, and had moved towards flat design, which I think was an important break point. Designers started to think more about products and less about graphics. And motion and interaction started to play a much bigger role in app design. It was clear that we had entered a new era of designing products.
However, we knew that building a new tool would take some time. It just doesn’t happen overnight. So I think everyone was less worried about what was happening at that moment and was thinking further ahead. The thinking was to leapfrog the current generation of tools and jump to the future. To build the electric car of design tooling.
So do you believe most of the UX and UI design tools that debuted over the past two or three years are too focused on the “today” of the craft, and not enough on the tomorrow?
Yes. There is so much more that designers would love to have to simplify and speed up their process. Designing at the speed of thought is where we are all heading.
Getting to “speed of thought” tools requires negotiating the tradeoffs between ease of use and high fidelity—a WYSIWYG interface versus a code-intensive interface, is that right? How did you strike the right balance for XD?
That’s right; high fidelity and ease of use often go against each other. When we first started working on XD, we knew that we wanted to build a tool for any designer to pick up and start using right away, and to be able to use day to day. We decided to focus squarely on the design side and not on the development side of things, and also to stay platform agnostic. We talked to many designers—many advanced designers, but also emerging designers who hadn’t necessarily adopted any given tools and who were just starting to look around, which is great, because they are less anchored to specific solutions. We learned that no matter their skill level, they want to start quickly and move fast. One of our core principles has been “comfort first,” meaning that the tool shouldn’t get in your way; it should be very straightforward. It should almost feel invisible, performance included.
What are some examples of those “comfort first” decisions you made?
There are many. The contextual property inspector shows you just what you need when you need it. We have recent files and UI kits available on the welcome screen for immediate use, so you don’t have to hunt for them. One of my favorites is ghosting. Whenever you have an object, let’s say a photo, that gets clipped by an artboard, we display the clipped part ghosted with opacity to help designers work easily with the full object context. This applies to Boolean operations, and soon to masking as well. Another example is the distance guides, where we combined snaplines and distance measurements to minimize distractions. Some of these things are subtle, but when you experience them they feel so obvious.
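That ghosting behavior is easy to picture as a rendering trick: draw the whole object faintly, then draw it again at full opacity clipped to the artboard. A minimal Canvas-based sketch of the idea, purely illustrative and not XD’s actual code:

```typescript
// Minimal sketch of "ghosting": the part of an object that falls outside the
// artboard is still drawn, just at reduced opacity, so the designer keeps the
// full object in view. Purely illustrative, not XD's implementation.

interface Rect { x: number; y: number; width: number; height: number; }

function drawWithGhosting(
  ctx: CanvasRenderingContext2D,
  artboard: Rect,
  drawObject: (ctx: CanvasRenderingContext2D) => void, // draws the full object
) {
  // 1. Draw the whole object faintly ("ghosted") at low opacity.
  ctx.save();
  ctx.globalAlpha = 0.25;
  drawObject(ctx);
  ctx.restore();

  // 2. Draw it again at full opacity, clipped to the artboard bounds,
  //    so only the visible portion appears solid.
  ctx.save();
  ctx.beginPath();
  ctx.rect(artboard.x, artboard.y, artboard.width, artboard.height);
  ctx.clip();
  ctx.globalAlpha = 1;
  drawObject(ctx);
  ctx.restore();
}
```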
If I’m a new user, what should I expect from the first time (or first few times) I use Adobe XD? Will that comfort be obvious to me right away, or is there a learning curve?
We tried to minimize the need for learning. The basics, such as drawing and layout, should feel familiar from first launch. There are definitely features that users need to learn about, but they will feel natural after using them one or two times. For instance you can drag an image from your computer directly into a shape in order to mask it—no need to actually tell the app to mask the image to the shape. It’s such a logical thing to do but I haven’t seen other tools do this.
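The drag-to-mask heuristic can be sketched as a simple hit test on drop: find the topmost shape under the cursor and make the dropped image its fill, clipped to the shape’s geometry. The types and names below are assumptions for illustration, not XD internals:

```typescript
// Hypothetical sketch of the drop-to-mask heuristic: dropping an image onto a
// shape makes the shape act as the image's mask, with no explicit "mask"
// command. All names and types here are illustrative.

interface Shape {
  contains(x: number, y: number): boolean; // hit test in canvas coordinates
  fill: { type: "color" | "image"; src?: string } | null;
}

function handleImageDrop(
  shapes: Shape[],     // document shapes, bottommost first
  dropX: number,
  dropY: number,
  imageSrc: string,
): boolean {
  // Walk shapes from topmost to bottommost and fill the first one hit.
  for (const shape of [...shapes].reverse()) {
    if (shape.contains(dropX, dropY)) {
      // The shape's geometry becomes the mask; the image becomes its fill.
      shape.fill = { type: "image", src: imageSrc };
      return true;
    }
  }
  return false; // no shape under the cursor: place the image as a new object
}
```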
We’re coming up with a set of heuristics like this that will make sense to everyone. One example is if you duplicate an object multiple times. We know that, and we can show a contextual hint that says “[⌘R] Turn into Repeat Grid,” which takes the already duplicated objects and quickly creates a repeat grid for you. Another place where it comes in handy is path editing, since there are many operations you can do on points with different key modifiers or gestures.
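A rough sketch of how such a duplication heuristic might work, with the threshold and hint mechanism purely assumed:

```typescript
// Illustrative sketch of the duplication heuristic: after a few consecutive
// duplicates of the same object, surface a contextual hint suggesting Repeat
// Grid. The threshold value and hint callback are assumptions.

const DUPLICATE_THRESHOLD = 3;

class DuplicateWatcher {
  private lastSourceId: string | null = null;
  private count = 0;

  onDuplicate(sourceId: string, showHint: (text: string) => void) {
    // Reset the streak when the user starts duplicating a different object.
    if (sourceId !== this.lastSourceId) {
      this.lastSourceId = sourceId;
      this.count = 0;
    }
    this.count++;
    if (this.count === DUPLICATE_THRESHOLD) {
      showHint("[⌘R] Turn into Repeat Grid");
    }
  }
}
```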
We’re still building the proper onboarding experience, and that’s a big part of the learning. We know that many users won’t read or watch long tutorials, but maybe there are more contextual methods of helping them learn, right within the tool. So you’ll see contextual hints that provide just enough guidance by showing you shortcuts for commands related to the currently selected object. Of course this will only be useful if it’s valuable enough, and subtle enough that it doesn’t interrupt the design process.
How much are you finding that the XD beta testers are struggling with the biases and preconceptions that they might bring with them from other tools?
Ha, yes. There is definitely one that I am constantly fighting with: the zoom tool. Our zoom has this Mac or iOS native feel: pinch-to-zoom to a specific area on the trackpad, or Option-scroll on the mouse. It’s a buttery smooth zoom and I’m sure users will love it.
However, from the feedback we’ve learned that users are struggling to find the actual legacy zoom tool—the rectangle/marquee zoom. I definitely see a use case for that but pinch, in my opinion, has a much more natural feel. We’ll eventually support all the use cases, but it’s one of those things that I wish we could just skip.
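The difference between the two zooms comes down to the anchor. Pinch or Option-scroll keeps the canvas point under the cursor fixed while the scale changes, which is what gives it that natural feel; marquee zoom instead fits a dragged rectangle to the viewport. A sketch of the standard cursor-anchored zoom math (not XD-specific code):

```typescript
// Sketch of pinch/scroll zoom anchored at the cursor: the canvas point under
// the cursor stays put while the scale changes. A standard technique.

interface Viewport { scale: number; panX: number; panY: number; }

// screen = canvas * scale + pan, so to keep the canvas point under
// (cursorX, cursorY) fixed after changing the scale, adjust the pan:
function zoomAtPoint(
  vp: Viewport,
  cursorX: number,
  cursorY: number,
  zoomFactor: number, // e.g. 1.1 to zoom in, 0.9 to zoom out
): Viewport {
  return {
    scale: vp.scale * zoomFactor,
    panX: cursorX - (cursorX - vp.panX) * zoomFactor,
    panY: cursorY - (cursorY - vp.panY) * zoomFactor,
  };
}
```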
In general though, how open to change are you finding the beta users?
I think a lot of that goes back to onboarding, actually. If the intention comes across clearly, then it’s easy. However, we sometimes get a lot of feedback on certain things. That’s actually great. It’s exactly why we decided to start a dialog with users early on: first, to really see whether certain ideas are just crazy, or cool but really edge cases, or whether they will resonate well and speed up workflows significantly. Sure, you still have to trust your expertise and gut when making decisions, but having usage data and qualitative research helps a lot in settling on a decision. Either way, innovation is hard, especially when you are fighting expectations that are often not clearly articulated—because “it used to always be like that.” I think we can do so much better in areas such as symbols, styles and layers, and not just take what’s out there.
It’s interesting that you’ve built the layout tool simultaneously with the prototyping tool. In many ways they’re very different, but has the parallel development brought you insights you wouldn’t have had otherwise?
It led us to think about the round-trip between design and prototyping as the backbone of the modern experience design workflow. It opened many technological questions about rendering, interactivity and animation on all the platforms, for sure. At the same time, many of our team members have worked on animation tools and gaming engines previously, and that past experience is very helpful as we explore future features that will make both workflows feel even more connected.
That raises another question: how much is XD intended to be used end-to-end, and how much is it meant to complement existing apps? Could I take a UI layout from XD and put it in InVision, or could I take a UI layout from Illustrator and put it into XD? How did you formulate your philosophy on that?
XD is flexible, and you can really do all of that: import vector and bitmap assets into the app, export them, and reuse them in another tool. We’re working on tighter integration with Photoshop and Illustrator, and we will provide an extensibility layer for any tool to integrate.
We see XD as fitting in the center of the workflow. You can start in the tool or bring assets in and stay until perhaps you need advanced prototyping or custom behaviors. You can think of XD as a communication tool, to get a design vision across. We plan to provide as much as needed to cover the major areas of design and prototyping, while keeping the tool simple and fast to use.
It’s a challenge, since providing some advanced features would require us to take steps that would make XD a lot more complex and that would be in conflict with our principles. Examples might be adding a full timeline, code editor or photo retouching features. We think there’s a balance, where Adobe XD covers just the right amount of UX design workflow within itself, then enables other tools like Photoshop or Framer Studio and similar prototyping tools to extend the workflows where needed on both sides.
I’m glad to hear you say that, because I tend to believe that designers will be using a pretty eclectic set of tools for the foreseeable future—that we’ll always need to mix and match, to some extent. Are there opportunities for interoperability between your product and the wide variety of other tools out there, both from Adobe and from others? Is there going to be a standard prototyping file format, the way we have SVG or even to some extent PSD?
There are multiple options to enable this interoperability. A file format might help in certain cases, but really the extensibility APIs could return exactly what the tool needs, such as a rendition of a specific element or a structure. This is something we are currently investigating, and more details will follow. We want to enable these workflows to make sure designers can fit XD into the best place in their workflow and take their designs to other tools if needed for something very specific.
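To make the idea concrete, a purely hypothetical shape for such an API might look like the following; none of these names reflect an actual XD interface:

```typescript
// Purely hypothetical sketch of an extensibility API that "returns exactly
// what the tool needs": a document structure plus on-demand renditions of
// individual elements. Nothing here is an actual XD API.

interface RenditionRequest {
  nodeId: string;               // which element to render
  format: "png" | "svg" | "pdf";
  scale?: number;               // e.g. 2 for @2x assets
}

interface DocumentNode {
  id: string;
  type: "artboard" | "group" | "shape" | "text";
  children?: DocumentNode[];
}

interface DesignToolIntegration {
  getStructure(): Promise<DocumentNode>;          // the document tree
  getRendition(req: RenditionRequest): Promise<Uint8Array>; // rendered bytes
}
```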
Let’s talk about bringing XD to life then. What did you use to prototype what you were building?
We built a complete functional prototype with HTML and JavaScript and wrapped it inside a native app chrome so it looked just like a real design tool. It had full drawing capabilities and all sorts of new, ambitious features that we were testing, many of which aren’t in the shipping product yet. Today we can already actually prototype certain things for XD in XD, which feels really satisfying, but of course we also still prototype in actual code for more complex interactions.
And where did you start building? What was first?
We first looked at areas where we could significantly cut the time it takes to get something done. Repeat Grid and masking by dropping an image into a shape are two examples of that. But we also worked by de-constructing past patterns and seeing if we could put them back together in a better, more approachable way. Sometimes we landed close to where we began, which is fine—that’s good validation that changing essentials can actually have the opposite effect and slow users down. Other times we found a new or improved way.
Research-wise, what kind of outreach did you do with working designers?
Initially we invited a bunch of designers of all skill levels and backgrounds into Adobe and had them play with the prototype. It was a great way to build empathy, observe, validate and narrow down the scope of our future work, and to really fail fast and move on. When you’re prototyping, the world seems limitless. It’s when you start working on the real thing that you realize how crisp you have to be. We learned that there is a big difference between “cool” and “useful on a daily basis.”
These days we have a few Slack user groups, including one dedicated to a customer advisory board, where we discuss new features in real time and try to shape ideas with a few dozen select real-world users. But anyone can suggest features, vote on them, or file a bug using our UserVoice. Our internal design teams are also providing a constant stream of feedback.
How did that turn into a real project and a dedicated product team?
It was through a series of iterations as the project evolved. The prototype, in combination with market research, industry trends and feedback from users, helped build excitement and early support to move forward. But as soon as we started thinking about it as a real product, we put the prototype aside and did a bunch of research to hit the performance and quality bar that we knew was going to be critical for this tool. This led us to a new native codebase built from scratch. The turning point came when we felt the canvas performance played well with the new UI. All of those things got us support from within the company to take it from zero to one.
How did you get executive backing for the project? UX and UI design is a niche that Adobe has in the past been somewhat lukewarm on.
As it became clear that there was a really important opportunity here, we started to advocate for funding it as a real project, not just an experiment. So it was really a bottom-up effort, like a startup trying to get VC funding.
I don’t think there was one big green light though. As we talked to executives, they were interested but kept challenging us to keep proving that it all makes sense, to keep iterating and improving the vision.
We were also looking at the fact that there was a sudden uptick in prototyping solutions that were starting to enter the market. In fact, even with all these new tools coming out, we found that a lot of designers were using PowerPoint and Keynote—tools that weren’t even built for prototyping—to express their UX designs. That suggested to us that there was a great unmet need in the prototyping space—but we didn’t see anyone trying to put prototyping together with visual design.
Okay so the preview is out. What do you expect to happen between now and the official release of 1.0?
We’re going to be busy. While the horizontal workflow is in place (design, prototype, collaboration), now we’ll go deeper in all of those verticals. We’ll enhance the design features for higher fidelity, add new effects including background blur, enable authoring scrollable content and microinteractions, provide iOS and Android companion apps for real-time previewing, help designers manage bigger documents as their designs scale with symbols, styles and Creative Cloud libraries, add extensibility APIs for custom plugins and integration with other tools, and build a Windows 10 version. As we go we’ll fine-tune the product by responding to the feedback we get. We will also keep working on visions for future releases.
Looking further ahead, what is your vision for Adobe XD in, say, three years, or five, or ten?
Our mission is to build a tool designers love. Going forward, we think that your designs should be everywhere with you, so you can review them at any time and share with other people. The internet also enables new ways of live collaboration that were not possible before. We think both of these areas should be part of every designer’s workflow and we are planning to build a system of connected apps for desktop, mobile and cloud that enable collaboration between designers, developers and stakeholders wherever they are.
There is also a range of trends that we see coming into design tools, especially around adaptive layout to help with scaling designs to multiple resolutions, or quickly populating a layout with sample data and designing with real data (some of which we showed last year). I also think there is still space to improve the designer/developer workflow; this is challenging for numerous reasons, but perhaps semantics in design could help.
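Constraint-based adaptive layout, mentioned here and earlier in the interview, generally means each object pins some of its edges to its parent, so resizing the parent recomputes the object’s frame. A minimal sketch of that idea, with a constraint model that is assumed for illustration rather than taken from XD:

```typescript
// Minimal sketch of constraint-based adaptive layout along one axis: each
// child pins some edges to its parent, and resizing the parent recomputes the
// child's frame. The constraint model here is an illustration only.

interface Frame { x: number; y: number; width: number; height: number; }

interface Constraints {
  pinLeft: boolean;
  pinRight: boolean;   // pinning both left and right stretches the child
  fixedWidth: boolean;
}

function resolveWidth(
  child: Frame,
  constraints: Constraints,
  oldParentW: number,
  newParentW: number,
): Frame {
  const rightGap = oldParentW - (child.x + child.width);
  if (constraints.pinLeft && constraints.pinRight) {
    // Stretch: keep both margins, let the width absorb the change.
    return { ...child, width: newParentW - child.x - rightGap };
  }
  if (constraints.pinRight && constraints.fixedWidth) {
    // Keep the right margin and the width; move the child horizontally.
    return { ...child, x: newParentW - rightGap - child.width };
  }
  // Default: pinned to the left with a fixed width, so nothing changes.
  return { ...child };
}
```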
Maybe in the more distant future, designers will just provide requirements and inputs on what they want and a simple artificial intelligence (or a smart algorithm) will assist in remixing variations of the designs based on the current design trends and other parameters. You can imagine that it could also scale designs to other form factors. I can foresee algorithms that could help with brainstorming, moodboarding and gathering inspiration quickly, while applying chosen patterns to a design that’s being worked on. I think it all comes back to designing at the speed of thought.