I’ve developed a passion for UX and I do my best to keep up with the professional literature on the subject. There’s one blog post in particular that I keep coming back to: “A Brief Rant on the Future of Interaction Design” by Bret Victor. It’s almost a year old at this point, but I think the critiques it offers are universally relevant.
This post encourages me to think deeply about the future, and more generally about how we approach technology. I think Mr. Victor is absolutely correct – both in his critique of the currently popular vision of the future and, more essentially, in his argument that our technological future isn’t something that just happens. It isn’t inevitable. We can choose where we want our technology to go – what we want to design and build and pursue.
It’s shocking to me how many people don’t see it that way. When I was talking with a co-worker about some exciting new technological thing, she mentioned that she feels like she really should learn more about it and how it all works because “that’s where things are going.” As though the future is some kind of inexorable force that we can’t control. As though there’s some mandate that requires us all to keep up with it. This is how most people talk about technology and the future.
My co-worker came to me in the first place because I’m a tech geek, and I think she expected me to be all gung-ho about encouraging her to learn learn learn! w00t new tech! But that’s not how I reacted. What I told her, instead, was this:
Learn as much technology as you need to be useful to you. Don’t worry about the rest of it. Technology is a tool that we should each use in whatever way is most useful to us. If certain types of technology or technological knowledge aren’t going to be useful to you, there’s absolutely no reason why you should have to know them, or use them. And the technological future isn’t set in stone.
There’s a fundamental assumption underlying the design of many technology products that has always driven me crazy: an assumption that the software is smarter than me. An assumption that the user is too stupid to know how to do things correctly and will only muck it up if they’re allowed to sit in the driver’s seat. These products control the interface and experience to a great extent, rather than allowing the user to control it, and expect the user to alter the way they perform their tasks to fit the rules of the software. In this paradigm, the software and product design are actually treated as more important than the end user.
I can go on:
- There’s an assumption that people should understand the underlying structure of technology in order to use it fully and profitably.
- There’s an assumption that you must have actively participated in the last several years of development in order to know how to use the newest gadgets.
- Tech geeks – the programmers and coders and hackers (oh my!) who design and build the new technology – take it for granted that everyone already has a certain base-level of technological competence and largely don’t question the requirement for it… and it rarely occurs to us that we consistently over-estimate this base-level for most end users.
Unfortunately, this has been the reality for most of the Computer Age – and my experience tells me that I’m not exaggerating much. There’s a constant fear that if we don’t keep up with all the newest developments, we’ll fall too far behind to ever catch up again. It’s amazing to me how many people become functional Luddites, not because they want to be, but because they don’t feel like they know enough to be able to use new technology.
I think most people have bought into this worldview to one extent or another. I think most people truly believe that technology is smarter than they are and they should follow its instructions. In this modern world of rapid technological change, it’s very easy to feel like technology is out of our control. To feel like we have no choice – conform, or the world will pass you by.
This is not how technology should work. It should be the other way around. Technology should serve us and adapt to our requirements. User interfaces should be intuitive enough for anyone to pick up the newest gadget and use it successfully regardless of any previous experience or skill sets. As my father once put it to me – no one expects you to be a mechanic in order to get into your car and drive from Point A to Point B. Why should we have to know anything about the back-end structure of tech systems in order to use them?
This is the central guiding principle of good UX.
When I was in grad school, I took a class called “Library 2.0 & Social Networking Technology” taught by Prof. Michael Stephens. Prof. Stephens is credited with helping to create the Library 2.0 service model and is at the forefront of advocating for the integration of modern technology into library services. His detractors commonly dismiss him as nothing more than a rabid technophile. What struck me most powerfully in his class, though, was how completely wrong this dismissal is – I lost count of the number of times that Prof. Stephens cautioned us against technolust and the mindless adoption of new technologies. He always emphasized the need to evaluate technology, to make sure that it serves us and actively improves our lives, before buying into it.
This illustrates another phenomenon that I see today: people who are truly knowledgeable about the inner workings of new technologies hold them in far less esteem than people who aren’t experts. We tech geeks may have the highest incidence of technolust, but we’re also the ones who see most clearly the limits of modern technology. We’re the ones who see all the different ways that it could develop – and all the ways that it fails.
We’re the ones who know best of all that things don’t have to be this way.
I think we need to do a better job of communicating that to the rest of the world.