We’ve been experimenting with scale and VR for some time now.
About six months ago, I noticed that M was consciously using three scales in their work, thinking of them as Wearable, Holdable, and Be-in-able. I'd been conceptualizing a similar set of three scales in my own work for months, for similar reasons, thinking of them as Eyes, Hands, and Feet. So we started to talk about it and untangle some ideas from our intuitions, and we've been working with the three scales of Wearable, Holdable, and Habitable for the past six months.
Three scales: small, medium, and large, but not just any arbitrary three sizes. The three scales are based on specific functions of our interactions and perceptions. We've identified some design practices we've been using, and the ways our thinking about scale is grounded in the theory of embodied cognition. Keeping in mind that we intend this as a helpful taxonomy rather than a law of the universe, let's talk about the three scales, and review some of our earlier work on scale using this framework.
First, a note on scale skepticism
I used to be a scale skeptic. After all, this is Euclidean space: something can be any size and still have fundamentally the same geometry. Sure, something might be bigger or smaller than we are, but once we perceive it and it's in our heads, it's all the same, right?
I’d try to act unimpressed at architectural-scale artworks, convinced their only appeal was their exclusivity, the cost of materials and space being prohibitive to most artists and collections. I’d scoff at the idea of “bigger is better” in all things, how consumerist, how American, how boring!
And the same with miniaturization—sure, it's cute and all, and maybe the skills required to create such tiny details are impressive, but does that really make the object itself any different in a conceptual sense?
VR democratizes scale. Now anyone can make anything huge at the touch of a button, no expensive materials or real estate needed. Anyone can shrink something to be tiny and delicate in a way that would require inhuman precision to fabricate in real life. My expectation, 3.5 years ago when we were just getting started researching VR, was that we’d find a greater fluency with scale, that we could change the scale to any random size depending on what’s useful, but that fundamentally it’d all be the same. There’d be no more special artistic value in making something large or small.
Three years later, I’m happy to report that not only have I developed a much greater appreciation for scale, but that through many experiments we’ve started to come up with a model for how to design for scale in a rigorous and conceptually meaningful way.
Why Scale is Meaningful
The mathematician part of me insists the body is an independent observer that takes in information from an outside world through objective sensory tools like eyes and ears, with any bias being introduced after the fact by faulty logic in the brain. As so often happens, science has its say and ruins what seemed so simple and clear.
The body is more than just a tool for perceiving an independent reality. The actual incoming data from the outside is filtered by both conscious and unconscious processes before it reaches the parts of our brain that decide what it is in the first place. What the world looks like, feels like, sounds like, depends on our intentions and our attention, in various feedback loops that make it a wonder that human beings seem to have anything like a common experience at all.
What that means for scale is that our perception of something large enough to move around and inside is mediated by the part of our brain that controls those functions. Our perception of a tiny object with tiny parts that can be overviewed at a glance is seen through different eyes than one that we have to move around to see all of. Something with parts so tiny they are felt only as a texture on the fingertip and require special tools to manipulate is different to the brain than one with big enough parts to be grabbed and placed around a table, which again is different from one with parts that must be bodily picked up and carried around one by one, possibly with the help of large machines.
And so while in theory there’s an infinite range of scales one could use, we’ve found ourselves working with three in mind, based on what different functions of the body want to work with, and given VR/AR’s current capabilities of room-scale movement, head tracking, stereo visuals, and basic hand controls.
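As a rough sketch of how this taxonomy might be applied in software, here is a hypothetical classifier that buckets a virtual object into one of the three scales by its longest dimension. The cutoff values are my own illustrative assumptions, not figures from our research.

```python
# Hypothetical sketch: bucket a virtual object into one of the three
# scales by its longest bounding-box dimension (in meters). The cutoff
# values below are illustrative assumptions, not established standards.

WEARABLE_MAX = 0.15   # roughly glanceable, pocketable, body-storable
HOLDABLE_MAX = 2.0    # roughly manipulable within arm's reach

def classify_scale(longest_dimension_m: float) -> str:
    """Return 'wearable', 'holdable', or 'habitable' for an object."""
    if longest_dimension_m <= WEARABLE_MAX:
        return "wearable"   # seen at a glance, worn or collected
    if longest_dimension_m <= HOLDABLE_MAX:
        return "holdable"   # grabbed and turned with the hands
    return "habitable"      # moved around and through

print(classify_scale(0.02))   # an earring
print(classify_scale(0.5))    # a model on the workbench
print(classify_scale(8.0))    # a room-sized structure
```

In practice the boundaries would depend on the user's body and the tracking hardware rather than fixed numbers, but a bucketing like this could let an app choose the right interaction affordances per object.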
Holdable Scale: Local Manipulation, Body-sized interactions
We’re starting with the middle scale because that’s the scale we usually start at when creating virtual objects. At the moment, at least, this gets done using hand controls with spatial tracking, or direct hand tracking. It uses macro movements within the vicinity of the body where it’s easy to reach and move the arms without getting tired, and also the smaller movements of the fingers on controller buttons or through gesture tracking.
We grab things and put them together, turn them around, stretch our hands apart and squeeze our fingers together, occasionally leaning over and reaching for something but generally staying within what M calls “the hug zone” to avoid fatigue.
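One way an interaction system might honor the "hug zone" is a simple reach check that keeps grabbable targets within comfortable arm range. The radii below are rough guesses of mine, not ergonomic data.

```python
import math

# Illustrative sketch: keep interaction targets inside a comfortable
# "hug zone" around the chest so users don't over-reach and tire.
# INNER/OUTER radii are rough guesses, not measured ergonomic values.

INNER_RADIUS = 0.20  # too close to the body to grab comfortably
OUTER_RADIUS = 0.55  # beyond this, reaching starts to fatigue

def in_hug_zone(chest, target):
    """True if target (x, y, z) is comfortably reachable from chest."""
    dist = math.dist(chest, target)
    return INNER_RADIUS <= dist <= OUTER_RADIUS

chest = (0.0, 1.4, 0.0)
print(in_hug_zone(chest, (0.3, 1.4, 0.2)))  # within easy reach
print(in_hug_zone(chest, (1.0, 1.4, 0.8)))  # a lean-and-stretch grab
```

A real system would likely track the shoulders rather than a single chest point and adapt the radii to the user, but the idea is the same: prefer placing frequently grabbed objects inside the comfortable band.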
It’s interesting to note that in VR, as in many real-life creation processes, the macro movements of the arms carrying the hands through space go together with the smaller movements of the fingers. This is in contrast to normal desktop workspaces, which are finger-focused, requiring small movements on the keyboard and mouse while ignoring the hand’s wider spatial capacities.
With current VR technology the hands and fingers get most of the work, but conceptually I think this scale includes more of the body. At medium scale you wouldn’t plan on using your feet and legs to walk around, but they could be included in smaller motions of the body, as when playing piano or organ, using a sewing machine, driving a car, or just generally pushing around or picking up objects. It’s not all about the hands, which is why I’ve revised my original personal categorization from “hand-scale” to “holdable-scale”.
The mouth also gets used at this scale when working in real life, often to temporarily hold an object while grabbing something else, or to blow off debris. I’ve seen a few VR apps that can detect a blowing noise, which is promising. But I haven’t seen it used in a general way, only as specially programmed functions of specific objects, so I’d like to see someone experiment with making blowing useful as a general tool when creating or interacting with objects at medium scale. Maybe for nudging objects slightly, as that’s a function we expect at this scale when using real hands, but it’s often a challenge in VR/AR, due to the lack of the stabilizing forces of friction, weight, elasticity, etc.
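To illustrate how a general-purpose blow tool might work, here is a sketch: treat a burst of microphone energy above a threshold as "blowing" and convert it into a tiny nudge on an object. All the names, thresholds, and force values here are invented for illustration; a real detector would also check the spectral shape of the sound, not just its loudness.

```python
# Hypothetical sketch: detect "blowing" from a window of microphone
# samples via RMS energy, then apply a tiny nudge to an object's
# position. The threshold and nudge size are invented for illustration.

BLOW_RMS_THRESHOLD = 0.3   # normalized amplitude; would need tuning
NUDGE_STRENGTH = 0.005     # meters moved per detected blow frame

def rms(samples):
    """Root-mean-square energy of one audio window."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def nudge_if_blowing(samples, position, direction):
    """Shift position slightly along direction when blowing is heard."""
    if rms(samples) < BLOW_RMS_THRESHOLD:
        return position
    return tuple(p + NUDGE_STRENGTH * d for p, d in zip(position, direction))

loud = [0.5, -0.5, 0.4, -0.4]      # a breathy burst
quiet = [0.01, -0.02, 0.01, 0.0]   # room tone
pos = nudge_if_blowing(loud, (0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
print(pos)                                            # nudged along x
print(nudge_if_blowing(quiet, pos, (1.0, 0.0, 0.0)))  # unchanged
```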
And much of the real body is used to nudge at this scale in real life. Noses, chins, forearms, knees, and toes. Whether to clear a little space here, or see what’s under that rock there, or shift an errant bit of cloth. Nudging is an important part of our local control over our environment.
The medium scale is often a working scale, an interaction scale, rather than the natural scale of the object we’re interacting with. Things made at medium scale often get either shrunk into a conveniently small object, or expanded into an environment.
Wearable Scale: Glance, Collect, Sort
This is the small scale. You can’t really manipulate a small-scale object’s parts, beyond maybe a few basic things. You can turn it around and look at it, you can have it interact with other things, you can put it places or wear it. It can be seen at a glance, although this glance can be done with the tips of fingers as well as eyes, or other fine-scale feelers such as the lips or tongue.
I originally thought of this scale as the scale for the eyes, or as the “collectible” scale. But I’ve hopped on over to M’s terminology of “wearable” scale because we’re moving towards body-based storage, the idea that the ease of storing and finding things based on where they are on our bodies is perhaps a more natural user interface even than storing things in rooms (M shows a demonstration of this idea in their VR notebook tour).
Wearable-scale objects are like textures. Their many components are perceived as a whole, and that wholeness only gets changed through special equipment, or by scaling it up to medium scale in VR, or by turning it into a part, as when small objects get manipulated at medium-scale as part of medium-scale objects. Any functions within the small-scale object are abstracted and represented by the object itself.
The small scale is more like the big scale than you might expect. At small scale, one of the identifying features of an object is where it is. There may even be collections of them, categorized and sorted into drawers and shelves, or on different parts of the body, or in pockets. We use our amazing spatial abilities to be able to collect and sort and remember where small objects are, throughout our houses and bodies and workplaces and world.
We also use medium-scale interactions to work with small-scale objects. We know how the body moves locally when it uses the object, and how the body moves through space to find it. These interactions are conceptually attached to the object and part of what makes it what it is. An earring is a thing we put on our ear. A tool is a thing we put in our toolbox. A cup is a thing we use like a cup and put in the cup cabinet, no matter whether it might otherwise be a jar or a vase. We impose part of our understanding of small objects from the outside, by choosing how we use it and where we put it.
Habitable Scale: Move, Live, Organize
Large architectural-scale things impose their own paths of motion. We move around and through them, rather than staying still and moving them. Mathematically, I want to believe that isn’t a real distinction, that it’s all relative. But to the brain and body, there’s a real difference between whether we move around something or it moves around us. A habitable-scale environment is fixed; it is the reference point, and we move within it according to its design.
Part of a large-scale object is how it lends itself to organizing smaller things within it. Habitable-scale objects create their own contexts, provide their own tools. A house or a workshop might come to mind. But now, with VR and AR technology, more kinds of things can have habitable scales. If I could walk around inside my art supply cabinet instead of reaching into it from the outside, how would I organize it differently? What would your phone interface be like if you could walk around in it? If I could walk around inside a hammer, or a formatting toolbar, or my watch, what would I want to be able to know or change about its properties, and how would I represent that?
I think the cost prohibitiveness of architectural-scale objects, and the reality of physics and structural integrity, has made this scale the least explored of the three, and the most ripe for fundamental innovations.
There are things much bigger than houses, and things much smaller than earrings. But I’d argue that from the perspective of body-based human-computer interaction, those scales aren’t a thing. We can’t use our bodies at the scale of a galaxy or an atom. But we can shrink the earth down to the size of a map and put pins in it at holdable scale, and we can scale up a carbon crystal lattice to walk through at habitable scale.
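That kind of re-scaling, earth to tabletop map or crystal lattice to walkable room, amounts to a single uniform scale factor. Here is a minimal sketch; the sizes are approximate and the target values are arbitrary choices.

```python
# Minimal sketch: the uniform scale factor that brings an object of
# one natural size to a chosen interaction scale. The physical sizes
# below are approximations; the target sizes are arbitrary examples.

def scale_factor(actual_size_m: float, target_size_m: float) -> float:
    """Factor to multiply an object's dimensions to reach target size."""
    return target_size_m / actual_size_m

EARTH_DIAMETER = 1.2742e7      # meters, approximate
LATTICE_SPACING = 3.57e-10     # meters, approx. diamond lattice constant

print(scale_factor(EARTH_DIAMETER, 0.5))    # earth -> holdable map
print(scale_factor(LATTICE_SPACING, 2.0))   # lattice cell -> habitable
```

The interesting design work is not the arithmetic, of course, but deciding which of the three scales a given subject becomes most legible at.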
A habitable scale ice crystal lattice depicted in The Magic School Bus Rides Again
You could imagine in-between scales too, and I encourage you to do so! We’ve found these three scales to be a useful structure to design within, so that we don’t have to start from scratch every time or create an entire new framework for every interaction. We think it covers the space while still being minimal.
We’ve got lots of examples of using scale in our work I’d like to review in this context, but that will have to wait until next time!