The story of technology swings from narratives of hope to a kind of collective despair and anxiety.
We create myths out of people who walk around in faded jeans and hoodies and in the next breath have nightmares about robots taking over our factories, our homes, our cities, our collective conscience.
We revere Steve Jobs or envy Mark Zuckerberg and then imagine the grip technology has on our children and worry about what it will do to their brains, their sense of creativity, their future job prospects.
The Tech World Looks Inward
For the past few weeks, the tech community itself has been gripped by a kind of angst-driven navel-gazing, not without merit, but also not for the first time.
Confounded by success stories that often come coupled with acts of hubris and sleaze, the tech world is like the kid in the schoolyard who’s asked by his teacher, “Why didn’t you DO something to stop the bullying?” and is left struggling for words.
Elon Musk is worried that artificial intelligence will escape into the wild and cause our extinction. Peter Thiel (the very definition of “techy” and eccentric) thinks that Silicon Valley just doesn’t GET how the rest of the world fears change, fears digital transformation.
But it’s not just technology itself which is the cause of the current angst. The Valley (and beyond) has some accounting to do.
Here in Toronto, a truly diverse city, the tech industry has a diversity problem.
It leaves Nick Bilton at the New York Times wondering whether, if investors don’t step in, government might:
Mr. MacDonald noted that part of the problem with tech companies is that, from a financial perspective, there is no incentive to do the right thing. Companies like Uber or Facebook have done the wrong thing in the past and still grown at staggering rates. But, he said, when companies go too far, there are two outcomes: either customers will find an alternative or regulators, with enough public outcry, could break up the party.
What Tech Gets Wrong
But there’s a logical fallacy at the heart of how most people in the tech industry think about technology.
They believe that fears of technology itself are misplaced.
Peter Thiel is the perfect example, embodying a kind of blinkered mindset that is all too endemic to The Valley.
It’s an attitude that says people are, well, basically stupid – they fear change, fear technology, and it’s our job as guardians of the cult/the guild to at least recognize how angst-ridden and fearful the common man is.
The rest of the world hasn’t had a chance to taste the Kool-Aid, but give them time, be more sensitive, and they’ll see the light eventually.
If there’s a fault of technology it’s not the technology that’s to blame. It’s the people.
In this view, if Silicon Valley has faults, it’s not the tech which we should hold accountable but a few bad apples. It’s the misogynist founder or the frat boy CEO whose ruthless pursuit of success leaves a trail of scandals in his wake.
Empathy and Emotion In a World of Code
Om Malik, no slouch when it comes to thinking about technology, writes that it’s time for the tech industry to add emotion and empathy to its products:
Having watched technology go from a curio to curiosity to a daily necessity, I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, “minimum viable product” or “growth hacking” are.
But it is time to add an emotional and moral dimension to products. Companies need to combine data with emotion and empathy or find themselves in conflict with those they deem to serve.
What Om perhaps doesn’t explicitly state is that the moral dimension of technology isn’t just embodied in how we use technology. It’s deeply embodied in how we build it.
And it’s embodied in our concept of who we “deem to serve”. Our focus is usually on the individual user. On growth rates and trajectories, sure, but mostly on doing what we can to release new features that will appeal to specific users.
This leaves us often ignoring the larger cultural dimension of what we build.
Boundaries and Features Build Our Moral Character
Technology isn’t devoid of moral choice, of ethical decisions, of emotion or empathy. It isn’t a neutral actor waiting for someone to pick it up and do either good or evil.
Technology itself has built-in boundaries, shaped by decisions that may have seemed, at the time, like purely aesthetic or user-driven choices but which on reflection had a moral dimension.
The Internet might be a set of features, but what was built in (or left out) has had an impact that lingers to this day.
Walter Isaacson outlines some of these dimensions in his new book The Innovators, and we learn that many of the Net’s founders still regret that back-linking and the capacity for transactions weren’t built right into the web.
These are things that might seem at first glance like features but which were also an implicit choice about the meaning of authority, transparency, exchange and the commercial value of content.
Features are what a user sees. The moral dimension of technology is what happens when those features move beyond the individual user into the larger cultural context of society.
The fact that the technologist was building for features and users doesn’t decouple them from the social context in which that technology takes hold.
Silicon Valley pays plenty of lip service to that larger dimension. Every company is, after all, out to save/change/disrupt/transform the world.
Yet there’s often little connection between the CEO giving good press about the better world we’re all supposedly headed to and the engineering team just trying to ship the damn product, who probably don’t spend a lot of time thinking about the cultural and moral dimension of what seem like a few simple lines of code.
The Moral Imperative of the Internet of Things
Today, we see a similar tug between different value systems as the Internet of Things begins its march.
From data-driven monster networks whose purpose is to collate trillions of data points to concepts of control, usability and opacity in the connected home, there’s a battle for the next digital frontier.
While it may come disguised as feature sets, the decisions being baked into connected products now will have an inescapable moral dimension in the years to come.
It’s easy to focus on whether a connected product can adjust the temperature of your home. It’s harder to focus on the cultural and moral dimension of buildings that know we’re present, of physical spaces that detect we’re around, of objects that can listen, and of ambient signals and sensors that are slowly starting to reshape physical spaces in real time.
On the real-time web, your Facebook feed changes based on a thousand little signals – who your friends are, what cookies you have stored on your machine, what sites you’ve visited and who you’ve chatted with on WhatsApp.
On the Internet of Things, the number of signals will be profoundly greater.
And instead of it being just a web page whose content changes based on those signals, physical space itself will become increasingly fluid, flexible, real-time and data-driven. The products on the shelf in the morning will be different from the afternoon, stores will become “Uber-ized”, the boundaries between offline and online purchasing will disappear.
Beacons Aren’t Neutral Observers
And beacons are playing a leading role in how those first steps towards the digitization of physical space proceed.
They’re bloody harmless-looking things. They don’t do very much, really. They broadcast a signal.
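To make that concrete, here is a minimal Python sketch of what one common flavor of that broadcast actually contains, following Apple’s widely documented iBeacon manufacturer-data layout. The payload bytes below (the UUID, major, minor and TX power values) are fabricated purely for illustration:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes) -> dict:
    """Parse BLE manufacturer-specific data in Apple's iBeacon layout.

    Layout (25 bytes): company ID 0x004C (little-endian), type 0x02,
    length 0x15, a 16-byte proximity UUID, 2-byte major and minor
    (big-endian), and a signed 1-byte calibrated TX power in dBm.
    """
    if len(mfg_data) != 25:
        raise ValueError("not an iBeacon frame")
    company, frame_type, length = struct.unpack_from("<HBB", mfg_data, 0)
    if (company, frame_type, length) != (0x004C, 0x02, 0x15):
        raise ValueError("not an iBeacon frame")
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return {"uuid": str(proximity_uuid), "major": major,
            "minor": minor, "tx_power_dbm": tx_power}

# A fabricated advertisement payload, just to show the fields.
frame = bytes.fromhex(
    "4c000215"                          # Apple, iBeacon type, length
    "f7826da64fa24e988024bc5b71e0893e"  # proximity UUID (example value)
    "0001"                              # major = 1
    "000a"                              # minor = 10
    "c5"                                # TX power = -59 dBm
)
print(parse_ibeacon(frame))
```

That’s the whole broadcast: an identifier, two integers, and a power level. Everything else is decided further up the stack.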
And yet already, the moral character of beacons is rapidly becoming embedded.
Now, you may work with beacons or know someone who does. But when was the last time you talked about the cultural importance of beacons, their ethical dimension, the moral choices implicit in their design?
Sure, maybe we talk a bit about consumer privacy – “But beacons don’t COLLECT anything!”
In other words, we use the same line as everyone else in the tech industry: it’s not the tech itself that’s to blame, it’s the people who misuse it.
It’s still early days for these little devices. There’s a ton of innovation, from the chipset up to the ‘cloud’.
And within those innovations, we’re seeing a set of features being developed which, collectively, will place limits and boundaries on future developers and users, boundaries that embody the moral and emotional choices beacons represent.
Beacons Embody Belief
Now, I’m not sure the designers of the devices ever thought about the implicit assumptions they were making in their development.
They’re just trying to ship great products.
But having said that, I’ve come to know (and respect) a lot of the innovators in the beacon field.
And I can see their personalities and beliefs built into their products:
- For Estimote, the design, the casing, the tactile feeling of the beacon implies a belief that they have a visible role in our physical landscape
- At Radius Networks, there’s a collective wisdom in the crowd, in shared code, in granular tools which can be assembled at ever-higher levels of complexity
- For the Wireless Registry, beacons and devices need to be able to authentically claim “I’m here, and I’m who I say I am” (although reconciling that vision to the motivations of device-makers will be a tough, tough slog)
- Kontakt keeps extending its vision of a tightly engineered and coupled cloud, a kind of technocracy of code
- Gimbal is building ubiquity and security with itself as the gatekeeper of the nodes
- Google is trying to advance the notion of the physical web, treating beacons as just another URL that gets collated to the larger cloud
- Samsung sees beacons as commerce. (Right now, anything that gives them a commercial edge is likely driving a lot of decision-making)
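Google’s Physical Web stance is a good example of assumptions hiding inside a format. Its Eddystone-URL frame squeezes a web address into a handful of advertisement bytes by hard-coding which schemes and domain suffixes deserve one-byte compression, which is itself a small editorial judgment about what the web looks like. A minimal decoding sketch, with an illustrative frame:

```python
# One-byte codes from the Eddystone-URL spec: four URL schemes,
# and fourteen common domain suffixes that compress to a single byte.
SCHEMES = ["http://www.", "https://www.", "http://", "https://"]
EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/",
              ".gov/", ".com", ".org", ".edu", ".net", ".info",
              ".biz", ".gov"]

def decode_eddystone_url(frame: bytes) -> str:
    """Decode an Eddystone-URL service-data frame back into a URL.

    Frame layout: type byte 0x10, a signed TX-power byte, one
    scheme-prefix byte, then the URL with common substrings
    compressed to single low-valued bytes.
    """
    if frame[0] != 0x10:
        raise ValueError("not an Eddystone-URL frame")
    url = SCHEMES[frame[2]]
    for b in frame[3:]:
        url += EXPANSIONS[b] if b < len(EXPANSIONS) else chr(b)
    return url

# Type 0x10, -18 dBm TX power, "https://" + "example" + ".com"
frame = bytes([0x10, 0xEE, 0x03]) + b"example" + bytes([0x07])
print(decode_eddystone_url(frame))  # prints https://example.com
```

Even the choice of which suffixes made that fourteen-entry table is a cultural decision frozen into the protocol.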
And these examples are just a smattering of the innovation being built around beacons.
Every project we see, if you scratch deep enough, embeds cultural choice and assumption, whether in concepts of push vs ambient computing, the value of information versus social exchange, or in how a developer views whether connected spaces replace, enhance or disrupt traditional ideas of physical design.
Even beneath all of this innovation, these amazing cultural viewpoints, beacons themselves have built-in assumptions:
- The right to pair securely or openly broadcast
- The right to be uniquely identified
- The right to sit to the side of, but become embedded with, other Internet-based technologies
Bluetooth LE itself has made assumptions about its own moral dimensions – unwittingly, perhaps, but it’s there nonetheless.
They make assumptions about being able to tag ownership (through an ID name space), apply base assumptions to concepts of quality of life (through profiles covering things like heart rate monitoring) and defer issues of accountability and data ownership to other parts of the stack.
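The heart-rate example isn’t abstract: the standard GATT Heart Rate Measurement characteristic bakes its quality-of-life assumptions right into a flags byte, whose lowest bit decides whether the heart rate arrives as one byte or two. A minimal sketch of that standard parsing, with illustrative byte values:

```python
def parse_heart_rate(value: bytes) -> int:
    """Parse a GATT Heart Rate Measurement characteristic value.

    Bit 0 of the leading flags byte selects the field width:
    0 means a uint8 bpm value, 1 means uint16 little-endian.
    """
    flags = value[0]
    if flags & 0x01:
        return int.from_bytes(value[1:3], "little")
    return value[1]

print(parse_heart_rate(bytes([0x00, 72])))          # prints 72
print(parse_heart_rate(bytes([0x01, 0xB4, 0x00])))  # prints 180
```

Somebody decided what a heart rate is, how wide it can be, and where it lives in the byte stream, and every device and app downstream now inherits those decisions.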
Beacons might not do a lot but the beacon protocols, the devices, the apps being built on top – each of them is coming loaded up with features and specifications that were built by people who have their own unique cultural prism.
Culture and Taste on the Internet of Things
“They don’t bring much culture into their product”.
Which isn’t, I don’t think, a statement about aesthetics. It’s a statement about emotion and empathy.
Om’s rallying cry deserves repeating: “It is time to add an emotional and moral dimension to products”.
Sean Gourley asks us to look at Big Data and think about stories:
“Data needs stories, but stories also need data. Data, when it’s put up in front of you as a number, it gets stripped of the context of where the data came from, the biases inherent in it, and the assumptions of the models that created it.”
But let me rephrase that statement for a world of beacons:
“The Internet of Things needs a narrative because it isn’t JUST about data or devices or code. Beacons can’t be stripped of the context in which they’re placed, we can’t ignore the biases inherent in their design or the assumptions built into the models that make their creation possible.”
The Internet had no built-in mechanism for verified identity, no way to pay for content, no reciprocity in links. You can’t understand the cultural and moral dimension of the Internet without understanding the biases upon which it was built.
And now with beacons, we’re already trending towards a world in which certain things are taken for granted: that “the cloud” will always be the leading paradigm, that transactional value will always be an add-on to the default tech stack rather than built into its DNA, that data matters more than stories, that apps are all there will ever be, that the physical world will always look like it looks today.
The Stories We’ll Tell
We’re creating a narrative about beacons. But as an industry we should continue to question, push, probe and extend the limits of what that narrative should be.
Because if we aren’t careful, the moral, cultural and social impact of the tech we’re all playing a role in creating will become ‘baked in’ without us really noticing.
And years from now we’ll blame the people using the technology instead of holding ourselves accountable for how the technology was built in the first place.
Beacons are representative of what I deeply believe will be a wave of technology that is exponentially more transformative than the Internet to date.
But there isn’t some secret group out there, there isn’t some governing body, there isn’t a team of people thinking deep thoughts about what it all means who will publish their conclusions one day.
You are it.
You’re the one shaping it. You’re the one whose decisions today will have a profound impact years or decades from now:
The shift from a generation that started out un-connected to one that is growing up connected will result in conflicts, disruption, and eventually the redrawing of our societal expectations. The human race has experienced these shifts before — just not at the speed and scale of this shift.
The guardians of that shift are you.
Ship great product. Make lots of money. Build cool stuff.
But let’s also take a moment to recognize the cultural stories our devices will tell, the limits or assumptions we’re building into our designs, and how well we express empathy and emotion in the simplest looking little beacon we build, tool we develop, SDK we launch, or experience we design. In other words:
Be The Beacon.
Share Your Thoughts
Thoughts? Comments? Drop them below.