September 18, 2015
Last week, Apple announced the iPad Pro, its most ambitious move yet to position the iPad as more than a casual-use (and niche-use) device. The hardware is unquestionably up to the task – Apple boasts that it’s faster than 80% of laptops sold in the past year, with a screen packing more pixels than Apple’s own Retina Display MacBooks. Yet even with all this, the iPad Pro will be held back not by its hardware, but by its operating system. A bigger form factor may signal professional ambitions, but what’s truly needed for the iPad to rival desktop environments is not an iPad Pro, but an iOS Pro.
The iPad as it stands fits the needs of some professionals. Viewed in a jobs-to-be-done framework, there are some jobs for which many will find it the best hire, but these tend to be all of a kind: They are performed by a single application with a narrow scope, such as a note-taking app to brainstorm or a marine navigation app to pilot a boat. The iPad Pro, by contrast, is being pitched to professionals whose needs center on more applications, with broader scopes and deeper networks of relational structure (that is to say, more files used together), none of which iOS was originally designed to accommodate. A big screen, a keyboard, even a high-fidelity pointing device do nothing to address this.
In this series of posts, I want to examine some of the shortcomings iOS has in offering a versatile, professional, creative environment, while proposing some potential solutions to them. The first is multitasking.
iOS 9, in anticipation of the iPad Pro, adds split-screen multitasking. It’s clear a lot of thought went into this particular approach, and it’s naturally much cleaner than wrangling a mess of floating windows. But it has one major drawback that stems from the very core of the iOS app-centric model.
It’s actually a little absurd: In the on-stage demos, Microsoft’s Kirk Koenigsbauer demonstrates the ease with which you can load up a Word document and a PowerPoint deck side-by-side. What he doesn’t show is that the moment you want to load a second Word document into that other pane (something possible since Microsoft introduced its Multiple Document Interface in the late 1980s), iOS will stop you: There’s simply no way to do it, because the app-centric paradigm of iOS has no room for document-level UI separation.
Solution: Share the metaphor
It’s not too hard to imagine a solution that can leverage the app-centric paradigm of iOS into something supporting multiple documents from the same app. Apps and documents both share the metaphor of the window on the desktop, so why not let them share the iOS pane model?
In an application that supports it, the slide-over menu gains a new option at the bottom for the current app. Tapping that instantiates another view of the app, defaulting to the document management or “open” view. The underlying iOS process model would likely need an overhaul for this to become a reality, but it’s a necessity.
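To make the limitation (and the proposed fix) concrete, here’s a toy model in Python – purely illustrative, with no real iOS API names – of the two ways split-view panes could be keyed. iOS 9’s model keys each pane by app, so a second document from the same app has nowhere to go; the proposal above effectively keys panes by (app, document):

```python
class AppKeyedScreen:
    """iOS 9-style split view: at most one pane per app."""

    def __init__(self):
        self.panes = {}  # app name -> the one document that app shows

    def open(self, app, document):
        created_new_pane = app not in self.panes
        # If the app already owns a pane, the new document simply
        # replaces the old one; a second pane is impossible.
        self.panes[app] = document
        return created_new_pane


class DocumentKeyedScreen:
    """Proposed model: one pane per (app, document) pair."""

    def __init__(self):
        self.panes = set()

    def open(self, app, document):
        self.panes.add((app, document))
        return True


ios9 = AppKeyedScreen()
ios9.open("Word", "report.docx")
ios9.open("Word", "notes.docx")      # replaces report.docx in place
print(len(ios9.panes))               # 1 -- only one Word pane, ever

proposed = DocumentKeyedScreen()
proposed.open("Word", "report.docx")
proposed.open("Word", "notes.docx")  # each document gets its own pane
print(len(proposed.panes))           # 2
```

Under the second scheme, the “current app” entry in the slide-over menu is just a way of minting a new (app, document) key – which is exactly why the process model would need to stop equating one app with one screenful of UI.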
Efficient management of multiple applications and multiple documents is a critical part of many professional workflows, but it’s far from the only part. The next installment will look at the concepts of files and projects in our hypothetical iOS Pro.
March 5, 2015
Five years ago, Apple’s last new product category, the iPad, was unveiled. In just a few days, the company’s next new product category, the Apple Watch, will make its debut. So I’m going to do something I haven’t done here since 2010 and make some informed (if almost certainly wrong) speculation about one of the biggest remaining unknowns: pricing.
Pricing is an often-overlooked part of design, something we tend to think of as purely the domain of business, to be taken care of by MBAs rather than product designers. But pricing is just as interwoven with design constraints and user psychology as any other design choice, and I’m nearly certain Jonathan Ive’s team played a part in the process.
Mind the gap
The one thing we do know about the lineup’s pricing is its floor: Apple Watch Sport is $349. We have wild speculation for the ceiling: five, ten, even twenty thousand dollar price points have been proposed for the top-end, solid-gold Apple Watch Edition. That’s uncharted territory, but what’s most interesting in terms of price design is the mid-range, un-suffixed Apple Watch.
Consensus seems to be that the steel-clad Apple Watch will start at nearly twice the price of the aluminum Sport. While this is hardly unreasonable to expect for a semi-luxury item, it would leave a huge hole in Apple’s usual pricing structure: Ever since the introduction of the iPhone, Apple has priced all its mobile devices in $100 increments, and it’s become a proven strategy for the company. Edition notwithstanding, I can’t really see them abandoning it for the Watch.
Finding the right variable
Apple’s upsells for mobile products have included storage space, screen size, and, in the case of previous years’ models, hardware revision, all priced with the same $100 increments. But the Apple Watch offers no such easily-tiered variable: storage isn’t offered in tiers, there’s no prior-year model to keep around, and only a small step separates its two screen sizes.
As for screen size, it’s highly unlikely the four-millimeter step from the 38mm to the 42mm version will be enough to carry a $100 premium (for aluminum and steel, at least); even $50 seems like a stretch. All Apple needs to cover is the cost of the additional metal (a few cents to a dollar?) and a few more pixels of OLED ($12?), for which a $30 bump seems just about right to maintain the product’s margin. And the $30 increment is a familiar one, used with iPads when the $100 increment alone wasn’t sufficient to maintain the margin on cellular radios.
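The margin math here can be made explicit: To hold a gross margin m steady when unit cost rises by some delta, the price has to rise by delta / (1 − m). A quick sketch with assumed figures – the roughly $13 of extra parts estimated above and a hypothetical 60% margin, neither of which is an Apple number:

```python
def price_bump(extra_cost, margin):
    """Price increase needed to keep gross margin unchanged.

    Since price = cost / (1 - margin), a cost delta scales the same way.
    """
    return extra_cost / (1.0 - margin)

# Assumed figures: ~$1 of metal + ~$12 of OLED, hypothetical 60% margin.
bump = price_bump(extra_cost=13.0, margin=0.60)
print(round(bump, 2))  # ~32.5 -- close to the familiar $30 step
```

So a $30 step is roughly what margin preservation alone would predict, before any psychology enters the picture.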
So that’s a $30 increment, but still no $100. How about color? Even though John Gruber has suggested a possible return of the 2006 MacBook’s “black tax,” Apple has been selling black versions of its hardware for the same price as all other colors since the iPhone 3G in 2008.
The magic of the repeated $100 increment is the negotiability it engenders in the buyer’s mind. Whatever you’re considering buying, the next step up is never more than $100 away – it’s right there, tempting you. It’s for this reason that Apple more or less has to have something at a $449 price point, and with no premium bands in sight for Sport, only a modest premium likely for screen size, and equal color pricing, that leaves the entry-level steel version to cover the gap.
From there, however, I think Apple has plenty of flexibility to maintain its $100 increment strategy with premium bands, each tier falling into the groups outlined by Gruber in his piece.
The market versus the margin
There’s no doubt that premium watch buyers are used to spending more than this for a quality steel watch. Perhaps Apple could charge higher prices to signal competition with quality mechanical watches, but I think they will lowball simply because they can. A steel-housed smartwatch doesn’t cost twice as much to build as an aluminum-housed one, and I don’t think Apple will price it as such.
I could be entirely wrong about this, but historically, Apple doesn’t really seem to care about charging a lot as much as they care about charging enough to maintain good margins. Remember Steve Jobs’ deft anchoring of the iPad at $999 before announcing it would be half that? However the Apple Watch ends up being priced, I think it’s going to be based more on the margins Apple can make than on how high the market might be willing to go.
September 10, 2014
The newly-announced Apple Watch is a curious fusion. It blends ingredients of both an old and a new version of the largest company in the US: On one hand, it’s a throwback to the days when Apple experimented with products designed not to change your life, but to improve a very specific facet of it. On the other, it is loaded with subtle signals of a substantial change in the way Apple approaches the market. It is simultaneously an extension of the iPhone in the same way the iPod was an extension of the Macintosh, while charting a fascinating new direction for the post-Beats-acquisition Apple.
The iPod was, in the big picture, not a game-changer. It was not a product that would usher in the digital age of several creative industries like the Mac did, or kickstart a variety of entire markets and even subcultures as did the iPhone. It was not a product that would enable you to do amazing new things like instantly translating words on a sign, but one that would simply take something you already do, listening to music, and make it slick, seamless, and more fun. It was a device that nobody needed, but one that millions grew to want.
In this way, the Apple Watch seems to closely follow the iPod playbook. Despite what many industry watchers have breathlessly proclaimed about the dawn of the wearable era, one would be hard-pressed to call the Watch, or indeed any smartwatch, a socio-technological shift on the level of the smartphone or the tablet. At the end of the day, all of them – Pebble, Android Wear, and now even the Apple Watch – are conceived purely as extensions of the smartphone. By and large, they enable new modes of doing the same things you could do with a smartphone, where the smartphone enabled you to do new things altogether.
And that’s not a problem, at least for Apple. Because this time, they’re trying what appears to be an entirely new strategy to out-iPod the iPod.
The category path
“New product categories” were the enigmatic, ear-perking words with which Tim Cook teased the Apple Watch (among other things) earlier in the year. But beneath the glossy presentation of a new Apple product and category, between the lines of the messaging both verbal and visual, is where a new, uncharted Apple becomes apparent.
Each “new category” Apple has entered over the past thirteen years has seen the debut of a product with a singular cachet. The first iPods, at $400 (almost $540 in 2014 dollars), grew to become a pop-culture icon of consumer tech luxury, until the $250 iPod mini, and ultimately the $99 Shuffle, made them accessible to the mass market. The iPhone, introduced at $499 on contract, was most visibly adopted by tastemakers for whom the Motorola Razr had become culturally and economically diluted, only to eventually have its past models offered free on contract. Even in a world with a 5.5″, gold iPhone 6 Plus, the mere presence of the cracked-glass iPhone 4 toted by a broke college student dilutes the iPhone’s cachet from its cultural height.
Built to last
It seems that Apple is attempting to build a new business that is immune to this dilution. An unabashed luxury (though as always, attainable luxury) sub-brand. If its desktop computer and mobile device segments sell you beautifully-designed options for things you need, Luxury Apple sells you the things you don’t need, but want.
It’s written all over the details. The new typographic standards, abandoning friendly Myriad for the more sophisticated and transatlantic DIN, all-caps at that. The new industrial design language that dispenses with the practical surfaces and materials of Apple’s computing hardware and adopts the language of jewelers, with not-entirely-necessary engraved detail text, mirror-finish metals, and extensive, first-party customization with the sorts of materials you might find composing products at Angela Ahrendts’ old employer. Make no mistake, Apple is speaking a new language with the Watch.
In this light, the Apple Watch is about more than smartwatches. It isn’t even particularly about wearables. It’s about building a new business on aspirational products now that its original aspirational products have become accessible. Will Apple succeed at this, or will annual updates of thinner, more capable devices from this new luxury side take the same path to the mass market as every other one of Apple’s successful premium devices?
An earlier version of this post misquoted Tim Cook and has been corrected.
May 31, 2013
I don’t remember exactly where I first encountered it. But at some point in the past three years, I, along with a large contingent of user interface designers, fans, and industry followers, learned a new word: Skeuomorphic. A skeuomorph in the UI world, so the popular definition goes (even if the rigorous scientific definition of the word makes “skeuomorphic design” an oxymoron) is a GUI or elements of a GUI that borrow from a physical analog of their functionality. It seemed like a useful concept in defining the execution of user interfaces, with an added benefit of sounding exotic in conversation – but I think we may have outgrown it.
Everyone loves a feud
Somewhere along the way, this newly-popularized concept gained the one thing critical to capturing our collective imagination: a foil. Where skeuomorphism was tied to the familiar, the tactile, the rich, the warm, this dark horse was divorced from the familiar; it lived in platonic ideals, and was simple, cold, mathematical. Despite drawing most of its theory from the 20th Century’s signature design movement, Modernism, it was nonetheless given its own, rather less impressive (and more prescriptive) name: “flat design.”
As the story goes, each of these poles had its champion, with Apple raising the varnished-oak banner of its increasingly unified mobile and desktop design language, and Microsoft carrying the solid, rectilinear flag of what was briefly but indelibly called its Metro design language. A war was brewing in the UI design world between flat and skeu: Apple’s rumored “move to flat” would stir more design-office conversation than a betrayal in Game of Thrones.
Not so flat
There is a problem with this narrative. Much of the interaction that users have with iOS devices is with UI elements having no physical analogs apart from the most basic, localized physical metaphor, the button – most of these are even just black Helvetica on a white background. And for all of Microsoft’s eschewing of texture, shading, and object references, one cannot escape that its many boxes and encircled icons ultimately draw affordance from our associations with a physical object, the humble pushbutton.
So if much of iOS is “flat,” and Windows Phone is loaded with a thousand tiny skeuomorphs, what are we left with? An important realization: “Flat design” is not nearly as flat as it looks. Skeuomorphism is a critical part of interaction design, and is everywhere.
How, then, do we verbalize the many clear differences between the examples of iOS and Windows Phone? The answer is to build a more nuanced framework than “flat” versus “skeuomorphism.”
Building a more useful vocabulary
Instead of imagining a fun-to-follow, yet ultimately empty battle between the forces of skeuomorphism and flat design, a more productive pursuit would be to construct a vocabulary around the toolset these concepts offer. Here are a few tools I see:
Functional object reference (“skeuomorphism”)
This is the sort of visual metaphor that ties an object from the physical world to a virtual tool. Ideally it is for purposes of building affordance from familiarity (turning pages in iBooks), but it can easily be misused (non-functional pages in iOS Address Book). Regardless of how realistically it’s rendered, a physical object can be useful as a reference so long as it is recognizable by the user and responds in the same way.
Non-functional object reference
The difference between this (which you may call skeuomorphism as well; I won’t stop you) and the previous tool is that the metaphor is implicit, if present at all. Ideally, there is some implied metaphor: Apple’s Game Center may not actually play backgammon, but its material references to a vintage board game case can put the user’s mind in a conceptual space for gaming. Often, this tool is simply used for decoration, but tasteful decoration can still aid user experience.
Depth cues
Whereas the previous two tools always use concrete references, depth cues may or may not have such a clear analog. Their primary purpose is to imply what can be done if the user interacts with the controls they adorn, referencing not real objects themselves, but rather their mechanical aspects.
Shape / color cues
Contrasts in shape and color are often used in conjunction with depth cues to further increase contrast and create visual hierarchy. The trend of “flat design” is to use them with minimal application of the other tools above, which can succeed so long as the shape and color contrasts are sufficient to create an affordance of user action.
Addition and subtraction
My current one-liner for when the subject comes up is “‘flat design’ is approximately as useful a term for user interfaces as ‘red design’ or ‘round design.’” Far from being a shot at the popular aesthetic that leans heavily on flatness, though, it’s meant to provoke thought: Flatness and depth are tools, just like color and shape, affordances and analogs.
Don’t just practice flat design or skeuomorphic design. Use the tools that are right for the interface that’s right for your users.
November 30, 2012
I recently set aside my aging iPhone 3GS for a new iPhone 5. Naturally, the latter covers all the bullet points expected of an update to a consumer electronic device: It’s faster, thinner, bigger-screened. Yet as much as these iterative advances may improve the day-to-day experience of using the device, they actually add up to a tradeoff.
One gives up several things along with the exchange of a 2009 smartphone for a 2012 smartphone. It might sound obtuse to say the things given up include low pixel density and time spent waiting for things to load, but these are more than annoyances made perceptible by the march of technology: They are connections to the medium. They are the signatures of the technology we use, bonds to time and place forged in memory; over time they become the familiar sensations of home.
In exchange for these connections to the medium, upgrades give us abstraction from it, the ability to perform tasks less encumbered by the technology’s inherent compromises.
Dissolving with the pixels
The history of raster-based computer displays may be seen as a single thread of increasing medium-abstraction from the technology’s earliest green-phosphor text terminals through today’s Retina displays. The experience of using the oldest screens was deeply connected to the limitations of the technology: Far from reproducing photographs in the millions of colors discernible by humans, images were limited to a single color and two intensities; even such screens’ greatest strength, text, was far removed from capturing the subtleties of centuries’ worth of typographic refinement. In the use of these technologies, the medium itself was ever-present.
As graphics technology improved over the next few decades, the technology itself began to abstract away as images could be reproduced at greater fidelity to the human eye and typography could be rendered with at least a recognizable semblance of its heritage. With high-DPI displays, the presence of the medium is all but gone – while dynamic range and depth cues may yet evade modern LCDs, the once-constant reminder that you are viewing a computer display has become so subtle as to have disappeared.
Computation, time, and distance
Every time you wait for a computer to catch up with you, whether it’s a second or two for a disk cache or an hour for a ray-traced image to render, you experience a signature of the medium in which you are working. Waiting for a document to save in HomeWord on an 8088 was a strong reminder that you weren’t dealing with paper. Invisible, automatic saving in Apple Pages lends a physicality to the document you’re working on, abstracting away the volatile nature of the medium.
A significantly faster network connection, such as the leap from 3G to LTE, further abstracts the already unimaginably-abstracted distances of the Internet. As this abstraction increases, our expectations adjust accordingly, pointing to a change in our mental models. I still recall that first time in the 1990s when I loaded a web page from outside the US, imagining the text and images racing over transatlantic cables as they piled up in the browser. A 20-megabit connection leaves no temporal space for such imagination.
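That shrinking temporal space is easy to put numbers on. As a back-of-the-envelope illustration – ignoring latency and protocol overhead, and with the 1 MB page size chosen purely for the example – the idealized time to pull a page over a few link speeds:

```python
def transfer_seconds(size_bytes, bits_per_second):
    """Idealized transfer time: payload size over raw link speed."""
    return size_bytes * 8 / bits_per_second

PAGE = 1_000_000  # a hypothetical 1 MB page

for name, bps in [("56k modem", 56_000),
                  ("3G (~2 Mbit)", 2_000_000),
                  ("20-Mbit LTE", 20_000_000)]:
    print(f"{name}: {transfer_seconds(PAGE, bps):.2f} s")
```

Over two minutes on the modem, four seconds on 3G, under half a second on LTE: the window in which you could once picture packets crossing the Atlantic has closed below the threshold of reverie.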
The last one you’ll ever need
For the past two years, during the ascendancy of “retina”-DPI displays, it has seemed plausible that the industry is at last approaching a point in display technology where further innovation won’t be necessary—displays could be “solved,” having reached the apotheosis of their abstraction. As Moore’s Law continues to conspire with faster networks and better UI design to melt away all the other aspects of the tool-ness of the digital tools we use, our consciousness of those tools predictably becomes less pronounced. In the long run, more responsive, more reliable, more accurate, more abstracted interfaces trend toward invisibility.
Given enough time and enough iterations, can the technology and design of an interface simply be solved, in totality, like the game of checkers? Can it be abstracted away entirely, leaving perceptible only user intent and system response? Can we ever become truly independent from a medium—visual information matched with the limits of human vision, latency for every network request below the threshold of human perception, and a UI with nearly zero cognitive load?
When we’ve lost the last traces of the “computer-ness” of a computer, will we have lost something meaningful? Or will our only loss be of fodder for nostalgia?