Recently, a coworker—knowing my fascination with UX and design philosophy—sent me a link to the following article:
Material world: how Google discovered what software is made of by Dieter Bohn (posted on The Verge on June 27, 2014)
Subtitled “The next era of Google design is about software as substance,” it presents an intriguing take on where Google is heading with its overarching design philosophy.
I love knowing that they’re thinking in terms of designing for how the human brain actually works—we need to be able to create mental models of our environment in order to fully function within it. That’s a psychological and neurological truth that’s been sorely neglected in the history of computer technology to date.
Whenever people talk about interaction design, I think about my mother-in-law. She can’t use modern tablets or smartphones; she just can’t figure them out. She’s a hugely smart and capable woman, but she hasn’t used a computer for anything other than work (where she only uses software purpose-designed for the task) in a good decade—she assumes that she’s so far behind the technology that she can’t even start to figure out how the new stuff works. In many cases, this assumption turns out to be true.
I contend that it should be possible to create technological interfaces that are so intuitive that anyone can pick up a device and use it effectively, without any prior knowledge or experience of previous generations of technology. Interfaces that the human mind can apprehend and comprehend in the same way that it interacts with the physical world—but without the need for technology to slavishly mimic the physical world (skeuomorphism). Digital worlds that are as mentally intuitive to us as physical reality.
It makes me happy and hopeful that Google is thinking along these lines.
I have two concerns about this idea, as the article above describes it:
I understand that Google wants to remove the burden of work from users as they interact with its devices and services. But I have major qualms about Google requiring me to hand over all of my data—I’d like to be able to use their devices and services without trusting them quite so much. Much of this reaction is a genuine fear about the loss of privacy, not a criticism of the desire to make their devices and services as easy to use as possible.
Part of this reaction is my ingrained response to what I refer to as the “Microsoft Philosophy”: I refuse to accept that the software is smarter than me—that it knows better than me what I want it to do. I refuse to accept that I won’t be allowed to use it on my terms. I sympathize and even agree with the desire to make technology as convenient as possible—but people should still be allowed to use it as they see fit and not be locked into modes of functioning that they can’t change.
I’m not saying that Google is planning to lock people in, but I can easily see their new design vision going that way.
To explain my second concern, I need you to read this:
A Brief Rant on the Future of Interaction Design by Bret Victor (posted November 8, 2011)
This is my absolute favorite essay about interaction design (I’ve written about it before). What I notice about this new guiding philosophy of Google’s is that:
- They’re still not designing for how our hands (and whole bodies) physically move and function. This new “Material Design” is still mostly just pictures under glass.
- The “metaphorical” material they talk about at the beginning—the idea that pixels can be more than just pixels—is wonderful and exciting… but I’m not convinced that it’s necessarily the best possible metaphorical idea. Given the sheer power of Google to determine our technological future, this could be another case of technology getting stuck with a sub-par design paradigm right from the beginning.
Overall, I’m really excited to see Google thinking about design this way. I just think we need more discussion and debate on the matter.
I, for one, (provisionally) welcome our (not so) new (anymore) Google overlords.