Responsive Design for the Internet of Things

We need a responsive design framework for the Internet of Things: a set of procedures, approaches and best practices that guide the design of the user experience in a way that’s responsive to the “real” instead of the screen. I’ve been calling this approach Responsive-R (for reality).

Responsive-R is an acknowledgement that there’s a major new influence on how a user consumes media experiences: the real world.

Where responsive web design tackled the challenge of screen and form factors (touch versus mouse, phone versus tablet), Responsive-R needs to tackle the challenge of physical context in how media experiences adapt to the user.

A Cross-Disciplinary Language for Design
Responsive design for the web has given programmers, front-end coders, UX designers and business-development teams a shared language that crosses disciplines.

It’s this cross-disciplinary collaboration that has, in part, driven its success: namely a widely shared understanding of what it is, and the code and the (admittedly evolving) best practices to back it up.

For the non-developer, the idea of responsive design is relatively easy to understand: web content now appears on lots of different screen types, so you should design your content so that it ‘adapts’ to those screens. For the front-end designer, it’s a paradigm change: the static screens you put together in Photoshop won’t be static once they’re coded up, so you need to apply a different lens to how you tackle design problems.

Responsive design challenges the business, creative and strategy teams to make decisions about mobile-first and the business benefits of content hierarchy, while the developers can translate all these decisions into the code that solves the tough challenges of media breakpoints, bandwidth and client/server-side responses to device types.

Bridging Strategy and Execution
Hand-in-hand with this cross-disciplinary language is the bridge that responsive design builds between business strategy (user goals and personas) and the tactical considerations driven by data and domain expertise.

In my opinion, responsive design isn’t truly responsive if all it’s doing is setting breakpoints for screen size. True responsive design evaluates user goals for different screen-driven contexts: the priority of content when I’m viewing it on my phone might be quite different from viewing it on a desktop site.

If I’m on my computer at home looking for a restaurant I have different expectations: I’m looking for richer data, photos, mood, user reviews. But if I’m walking down the street and I come to a restaurant site from a Google Map my priorities will usually be radically different: I want the restaurant’s hours of operation, snapshot ratings, menus, cost and address.

It might be exactly the same content across the two devices but it’s presented in a different hierarchy. The strategic decisions are driven by understanding the user, and the code executes this user-focused vision in how you push and pull rows and columns in your hierarchy and how you manage ‘show/hide on mobile’.

The Feature Sets of Responsive Design
So, first, Responsive Design can be strategic and user-focused. Second, it represents a set of features and approaches that become embedded over time in standard coding and design practices.

Frameworks like Bootstrap and Foundation give us tool kits that embody decisions and best practices and make it easier for the rest of us to follow along.

Our good friend Wikipedia does its usual admirable job articulating the consensus in describing key concepts related to Responsive Design: Audience and Device Aware (ADA); mobile first, unobtrusive JavaScript, and progressive enhancement; and progressive enhancement based on browser-, device-, or feature-detection.

The tactics by which it achieves this include fluid grids, flexible images, media queries and server-side components to aid load times and bandwidth.

Responsive Design Meets Native Apps
On a side note, responsive design isn’t just for websites and isn’t a stranger to native app development. For example, while the paradigms of object-oriented programming don’t easily lend themselves to responsive iOS development, Apple is force-marching its developers towards more responsive designs (and, I’d propose, towards different phone, tablet and TV screen sizes) through a more robust API for dynamic text, auto layout, constraints and a renewed emphasis on Storyboards with the launch of iOS 7.

Tackling the Real World
Responsive-R is an extension of the Responsive Design methodology. It’s both a philosophy and approach and might one day represent a set of shared best practices and code frameworks for its deployment.

At its simplest level, Responsive-R extends the audience awareness, progressive enhancement and feature-detection of responsive design into the physical world.

If responsive web design represents the philosophy and method by which we recognize that users access content on different devices and at different bandwidths, Responsive-R represents the philosophy and method by which we recognize that users will access content in different physical contexts because they are now connected to the Internet of Things.

Let’s Have Coffee: An Example
A good way to understand this idea is to give an example: with Apple’s iBeacon™ framework you’ll have a Bluetooth LE device that broadcasts a signal to the world around it. Your phone detects that signal and ‘wakes up’ and does… well, it does “stuff”.

You walk into a coffee shop and when you arrive your phone welcomes you (via an app). You walk to the shelf to buy a gift coffee mug and you get a coupon. You walk up to the cash register and you’re given a “punch” on your loyalty card.

All of these actions are facilitated by beacons in the space around you. Behind the scenes, they’re further facilitated by the iBeacon API by Apple. And inside that API the actions are being triggered by a bunch of things, including CLProximity, which tells you whether the beacon distance is unknown, immediate, near or far.

In other words, your app knows how close you are to a beacon and can deliver content based on that knowledge. The beacon represents a physical object in the real world and it lets you calculate your user’s proximity to it.
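To make the idea concrete, here’s a minimal sketch of proximity-driven content selection. In a real iOS app this state would come from CoreLocation’s `CLProximity` in a beacon-ranging delegate callback; here it’s modelled with a plain enum so the logic stands alone, and the content-layer names are hypothetical stand-ins for the coffee-shop example.

```swift
// A minimal sketch of proximity-driven content selection.
// In a real app, CoreLocation's CLProximity (unknown, immediate,
// near, far) would drive this; we model it with a plain enum.
enum Proximity {
    case unknown, immediate, near, far
}

// Hypothetical content layers for the coffee-shop example.
func contentLayer(for proximity: Proximity) -> String {
    switch proximity {
    case .immediate: return "loyalty-punch"   // at the cash register
    case .near:      return "coupon"          // at the gift shelf
    case .far:       return "welcome"         // just walked in
    case .unknown:   return "default"         // no reliable ranging
    }
}

print(contentLayer(for: .near))      // coupon
print(contentLayer(for: .immediate)) // loyalty-punch
```

The switch is the beacon-world analogue of a media query: one signal, a handful of named ranges, and a content decision hung off each.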

The Responsive Analogy
In this example, the analogy I’d use is to media breakpoints. CLProximity is similar to the ability to detect the screen size that your user is accessing your site on.

But knowing what screen size they have and doing something about it are two different things.

In non-responsive design, you’d create separate sites for mobile and web. In responsive design, you’d adapt a single source of content and it would fluidly ‘change’ based on the screen size of your user. (If you’re on a computer or laptop, shrink your window now and you’ll see how this site fluidly ‘adapts’ to different window sizes).

Now…yes, there’s always an exception to the rule. Even in responsive design there are instances where you’ll design completely different experiences for mobile that run off completely different code, HTML, JS and CSS files. Payments are a good example of this: the shopping cart that works on a large screen can’t be easily ‘scaled’ to smaller screen sizes.

But in the analogy of CLProximity, it seems like the evolving paradigm is to deliver different experiences at different ranges: the equivalent of delivering a mobile and a non-mobile site, but in this case delivering different content experiences for “near/immediate and far”.

With a responsive philosophy of design for beacons we might instead create a single experience that adapts itself based on proximity. Using the analogy of “push/pull” and “hide/show on mobile”, our interface might pull BACK the payment or loyalty layer of an app and then pull it forward when you get closer to the cash register.

It’s the difference between pushing out three view controllers for the three proximity levels and pushing out a single view controller that dynamically adapts based on the same information.
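A sketch of that single adaptive view, assuming hypothetical layer names: rather than three separate screens, one view model reorders its layers as proximity changes, the way a responsive page pushes and pulls columns at breakpoints.

```swift
// Sketch: one adaptive interface instead of three separate screens.
// Layer names are illustrative; the point is that a single view
// model pulls layers forward or back as the range changes.
enum Range { case far, near, immediate }

struct StoreView {
    // Returns the layer order for a given range: payment and loyalty
    // are pulled forward at the register, pushed back elsewhere.
    func layout(for range: Range) -> [String] {
        switch range {
        case .immediate: return ["payment", "loyalty", "menu", "story"]
        case .near:      return ["menu", "story", "loyalty", "payment"]
        case .far:       return ["story", "menu", "loyalty", "payment"]
        }
    }
}

print(StoreView().layout(for: .immediate).first!) // payment
```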

OK, Hold On. That Sounds Like a Lot of Useless Work
Maybe. But it’s when we extend this concept of responsive design to other contexts that it starts to get really interesting.

Let’s come back to the concept of progressive enhancement, for example. This is the idea that if a user is accessing your site on a mobile device there are constraints, and you should be adaptive to those constraints (bandwidth, image resolution, etc).

Aren’t there also constraints in the real world? For example, let’s say that you want to deliver a video about how your coffee beans are grown and have your customer’s app trigger that video when they’re looking at the shelf. First, you’ll use responsive design patterns to stream that video in a way that recognizes their device (iPad vs iPhone, say, or low versus high bandwidth access).

But what about other factors in the physical world? Does your customer have earphones? Is the coffee shop noisy or quiet? Is it a busy store or one of those relaxed places where people hang out all afternoon?

It’s not that these factors weren’t there before, but the sensors in the phone, and in the devices and beacons to which the phone connects, now make the data available in much the same way that media detection made responsive design possible.

As you start thinking about devices connected to the Internet of Things, you immediately start to realize that your idea for a video about coffee beans could easily have a dozen user contexts in which it will be consumed. And those dozen different contexts might all be in the same store.

Using a non-responsive approach, you’d give the user buttons or options, maybe you’d have a “read the story” and “watch the story” button.

But in a Responsive-R approach, you wouldn’t make the user do all that work.

Because what if you KNEW that the customer didn’t have headphones on, what if you KNEW it was noisy in the store, what if you KNEW that John usually only spends 3 minutes coming in for a quick coffee while Cathy is there all morning?

There might be a single piece of content – a “visual story about coffee” – but you would adapt that single piece of content based on factors in the physical world: adding captions or screen overlays when the user can’t listen to the audio, ending the video at the 20-second mark when it knows the user doesn’t have a lot of time to spend, but playing the entire 2 minutes when they’re out browsing.
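As a sketch, that adaptation rule is just a function from physical context to playback settings. The context fields and thresholds below are hypothetical stand-ins for sensor- and profile-derived data, not anything a current API hands you directly.

```swift
// Sketch: one video asset adapted by physical context rather than
// several separate edits. Fields and thresholds are assumptions.
struct PhysicalContext {
    var hasHeadphones: Bool
    var isNoisy: Bool
    var typicalVisitSeconds: Int
}

struct Playback {
    var captions: Bool
    var durationSeconds: Int
}

func adapt(fullLength: Int, to ctx: PhysicalContext) -> Playback {
    // Captions on whenever the audio can't be heard comfortably.
    let captions = ctx.isNoisy && !ctx.hasHeadphones
    // Quick visitors get the 20-second cut; browsers get the full story.
    let duration = ctx.typicalVisitSeconds < 300 ? 20 : fullLength
    return Playback(captions: captions, durationSeconds: duration)
}

// John: 3-minute quick coffee, noisy store, no headphones.
let john = adapt(fullLength: 120,
                 to: PhysicalContext(hasHeadphones: false,
                                     isNoisy: true,
                                     typicalVisitSeconds: 180))
print(john.captions, john.durationSeconds) // true 20
```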

In Responsive-R the design paradigm is to serve up content that isn’t just based on things like media breakpoints or device capability, but to serve it up based on factors in the physical world around you.

So What Does A Responsive-R Approach Need To Include?

Like responsive web design, there’s a combination of philosophy, analytics, code and design that comes into play when we think of Responsive-R. Add to that the limits of current mobile operating systems, concerns about privacy and security, and you immediately realize that Responsive-R won’t come gift wrapped, it won’t apply to ALL cases, and it isn’t easy.

Right now, I’m finding it useful perhaps more as a philosophy. It asks us to think about the user in the same way that responsive web design considered the changing landscape brought on by mobile devices.

Which is all a very long-winded way for me to preface the following by saying: I’m not exactly sure. And that maybe we need to figure this out together.

General Principles of Responsive-R

Responsive-R is a paradigm which proposes that we design user experiences in a way that:

  • Is adaptable to the increasing number of devices that consumers are using to access content (watches, phones, tablets, glasses)
  • Integrates the central role of other devices such as beacons, monitors and sensors in providing data, context and information
  • Integrates security and privacy as a central factor rather than as an after-thought

Responsive-R Utilizes New Data Sources to Craft Optimal User Experiences

Just as the detection of media screen sizes and bandwidth underpins the ability to deliver fluidly-adapting content in responsive web design, the Internet of Things is giving us new data inputs that will inform decisions about the user experience. These data sources include location, temperature, motion, time of day, speed, vertical and horizontal position, biometrics including breathing and heart rate, tone of voice, and language. Additional contextual information driven primarily by cloud-based data (such as sales, CRM and competitive information) makes the design experience even more challenging.

Responsive-R Optimizes Viewing Experiences Based on Physical Context

With this data, user experiences can be adapted to the user’s goals and context. The experience will adapt based on their physical location, their situation in that space, and the device’s capacity in that environment. These characteristics include the noise, lighting, and shared/private nature of the space; whether the user is or has been moving (running, walking); whether the physical space has any impact on the device’s capacity such as sound, screen brightness or other factors; and whether the user or devices are connected to other users and devices.

Responsive-R Prioritizes Fluidity and Adaptability

Not all experiences derived from connected devices will use a Responsive-R paradigm. But the key premise of Responsive-R is to prioritize the user experience and goals and to adapt rather than repurpose content for different user touch points.

In Responsive-R, designers will consider how a single piece of “content” will scale in/out, push forward/backwards, and progressively enhance and degrade based on the user context. For example, instead of creating 4 DIFFERENT videos for a location-based app, Responsive-R considers how a single piece of content can be progressively enhanced in its resolution, use of text, use of layers and use of audio based on user context (ambient noise, access to audio, etc).

Responsive-R Uses Other Connected Devices In Approaching Problems of Progressive Enhancement

The devices that are connected to the “Internet of Things” aren’t just transmitters or receivers; they’re part of the potential solution when we make design decisions to aid progressive enhancement, bandwidth loads and access. Responsive-R doesn’t just make server/client decisions on how to push out and manage content, it also uses the devices in the physical world as components onto which capacity and bandwidth can be offloaded.

In other words, the devices are clearly part of the network stack that supports the user experience – pushing off bandwidth loading, Internet access and other protocols to the devices themselves as part of our application stack. (We need to break the “app/cloud” paradigm to include devices as part of the network threading).

Responsive-R Treats Privacy As A Component Rather Than A Policy

User choice, localization based on regional privacy laws, and the possible impact on user uptake (or abandonment) make privacy a key consideration for the Internet of Things. But in Responsive-R, privacy isn’t a policy issue; it’s an approach and toolkit on a par with media breakpoints or feature-detection.

User content should naturally degrade or enhance based on localization and user preference for privacy, and designers should think about how the content and experience will fluidly adapt and change based on specified privacy “breakpoints”.
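A sketch of what a privacy “breakpoint” could look like in code, assuming hypothetical privacy levels and features: each feature declares the minimum level it needs, and the experience degrades gracefully below it.

```swift
// Sketch: privacy treated like a breakpoint. The levels and the
// features gated at each level are hypothetical.
enum PrivacyLevel: Int, Comparable {
    case anonymous = 0, local = 1, personalized = 2
    static func < (a: PrivacyLevel, b: PrivacyLevel) -> Bool {
        a.rawValue < b.rawValue
    }
}

// Each feature declares the minimum privacy level it requires.
let features: [(name: String, needs: PrivacyLevel)] = [
    ("store map",        .anonymous),
    ("proximity offers", .local),
    ("loyalty history",  .personalized),
]

func enabledFeatures(at level: PrivacyLevel) -> [String] {
    features.filter { $0.needs <= level }.map { $0.name }
}

print(enabledFeatures(at: .local)) // ["store map", "proximity offers"]
```

The design question then stops being “what does our privacy policy say?” and becomes “what does the experience look like at each level?”, exactly the question responsive design asks at each screen size.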

Frameworks, Best Practices and Approaches to Responsive-R

Just as responsive web design has given us the grid, the media query, flexible images and RESS, we’re at the very beginning stages of developing toolkits and approaches to support Responsive-R.

Moving forward, we’ll need best practices and enhancements to existing APIs and frameworks that include:

  • The equivalents to media queries for environmental factors such as ambient noise, temperature, etc
  • APIs and frameworks that allow graceful degradation and progressive enhancement of content based on user-initiated or local regulatory considerations
  • The equivalent of flexible grids that are driven by their connection to a standard set of protocols for context-specific situations. In this instance, I imagine things like standard libraries of dynamic View Controllers in iOS that are driven by their connection to the M7 processor capabilities, beacon ranging and detection, etc.
  • Libraries that translate the data from devices into meaningful APIs that are focused on adaptive user experiences. This is the middleware that turns pure data (e.g. temperature) into meaningful shorthand that can be used in a similar way to media breakpoints in responsive design.
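That last point is the easiest to sketch: middleware that turns raw sensor readings into the kind of categorical shorthand designers could target, the way CSS targets named breakpoints. The noise categories and decibel thresholds below are illustrative assumptions, not an existing API.

```swift
// Sketch: middleware that maps a raw reading onto a named
// "breakpoint". Thresholds are illustrative assumptions.
enum NoiseBreakpoint: String { case quiet, moderate, loud }

func noiseBreakpoint(decibels: Double) -> NoiseBreakpoint {
    switch decibels {
    case ..<50:   return .quiet      // library-level ambience
    case 50..<75: return .moderate   // normal conversation
    default:      return .loud       // busy cafe, street noise
    }
}

print(noiseBreakpoint(decibels: 42).rawValue) // quiet
print(noiseBreakpoint(decibels: 80).rawValue) // loud
```

Designers would then write rules against `quiet`/`moderate`/`loud` rather than raw decibel readings, just as they write rules against named screen breakpoints rather than raw pixel widths.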


For now, the idea of Responsive-R has been our own little internal jumping-off point into new ideas and approaches. It might just be back-of-the-napkin kind of stuff, and I’m sure there’s other work being done in how we think about the ‘operating system for the real world’ that we’re now creating.

What’s amazing about this time is how much potential there is to get the user experience right (and wrong)…and to think about a future in which the design paradigms for the Internet of Things arise from our collective imagination and effort. Done right, we’ll not only make our own lives easier by tackling these larger questions of design methods, we can potentially also make the lives of our users richer and more useful as a result.

(By the way – don’t be shy. This is a lot of random thinking and I’d love to hear your thoughts in the comments below, or e-mail me at

Jump Onto Our Mailing List
Why not join our mailing list for ‘BEEKn unplugged’?
And check out our flashy awesome company site too.

Be the Beacon!

