With screen-based design, it is easy to adopt, and be constrained by, the techniques of print. When we interact with, for example, a magazine, we are presented with static pages. Layout largely dictates how accessible and enjoyable content consumption will be. On each page, we are responsible for learning and understanding what the page is trying to communicate to us.
Designers (should) love this challenge. As a designer, when I work to build mobile apps, I ask myself how much I can control this process with my layouts. Much to the designer’s dismay, experience is largely subjective and personal. How then can we take advantage of the screen to further control and ultimately improve the experience during mobile app development?
Last time, we took a look at responsive apps from the perspective of speed-to-content; how quickly and elegantly can we move from context to context? Magazines are pretty good at this. Turn the page and BAM, instant results.
If we consider one common trait of our species (and many others), learning empirically requires time. If a person is presented with a large amount of data at once, that person is at a process crossroads. As part of the intuitive nature of learning, we break down and categorize content into more consumable pieces. Will we methodically pore over the content on the page or zero in on exactly what we want? Can we find ‘it’ within our tolerable threshold, and is ‘it’ even worth the effort? Layout can greatly help with this process, but is not always successful.
By using time, we can improve presentation of content by supplementing layout on the screen.
Temporal feedback, or feedback over time, is an essential cornerstone of this concept. Through the use of motion design, we will take a look at further enhancing the user’s experience by responding to interaction in unique ways. These techniques can improve user learning, strengthen design voice, and make an app more fun to use.
Unfortunately, it’s not as simple as saying ‘do this’; there are many hurdles to overcome:
- Design and implementation take time. Motion design and animation take more time. Learning the tools takes even more.
- Animation can be difficult to communicate, especially during the design-to-implementation process; a lack of foundational concepts and vocabulary is an additional barrier.
- Bad animation is counter-productive. No animation is better than bad animation.
How can we overcome these? The basics are pretty simple:
- Storyboard and prototype what happens when a user interacts, beyond static screen-to-screen flow. Think of each segue as an opportunity for a graceful, softer step, like a dancer moving through a song.
- Subtle motion is often the most effective, non-intrusive, and easiest to communicate and implement. It certainly helps that Titanium Mobile makes this incredibly easy, including rapid prototyping.
- Today’s tools are cheap and easy to use. At the start, pencil and paper will work just fine.
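As a taste of how little code a subtle motion takes, here is a minimal sketch of a 250 ms fade-in using Titanium Mobile’s view animation API. The property values and the `win` window variable are illustrative assumptions, not from a real project:

```javascript
// Create a view that starts invisible, then fade it in.
// `win` is assumed to be an existing Ti.UI.Window in your app.
var box = Ti.UI.createView({
  backgroundColor: '#336699',
  width: 100,
  height: 100,
  opacity: 0
});
win.add(box);

// Titanium tweens the opacity property for us over 250 ms.
box.animate({
  opacity: 1,
  duration: 250
});
```

That is the whole effect: pass the target property values and a duration, and the framework generates the in-between frames.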
- Storyboard: A linear series of static pictures (most often illustrated) that indicate what is happening over time. These points in time are critical to outline, as they communicate overall motion objectives and themes.
- FPS: In animation, time is represented by frames. We relate that time to seconds, minutes, etc. by determining how many frames will play back in one second, or FPS (frames per second). The most common rates are 24 FPS (film) and approximately 30 FPS (broadcast). Some animation is produced ‘on twos’, effectively dropping to 12 or 6 FPS to save time and money. In a real-time environment, FPS is ultimately determined by what is going on: frames will be dropped (making for jerkier animation) if the environment is trying to process more than it can smoothly handle. In game development, delta timing is used to normalize interactions and state changes against these fluctuations.
- Keyframe: Keyframes are used to indicate significant frames in an animation. With illustrated animation, keyframes are often the most detailed. In-between frames are typically less significant and detailed, bridging the gap between keyframes. With computer animation, keyframes help record state over time – like a stop motion snapshot. The computer can then automatically generate in-between frames. This is called tweening. It is not common to do frame-by-frame animation on a computer.
- Tween: In-between frames automatically generated by the computer to speed up the animating process. Titanium Mobile tweens for you when animating properties of a view over time.
- Easing: A technique for changing in-between motion. A good way to think of this is how you might make a dropping ball look when dropped. There is a keyframe at point A (i.e. your hands) and point B (i.e. the ground). In reality, elements like gravity and friction will change the way the ball moves between points A and B. A car going from 0 to 60 is also a good way to visualize easing. When an object moves from points A to B at a constant rate, this is known as linear easing. Titanium Mobile provides several easing options when animating a UI element.
- Masking: A technique for hiding otherwise visible content. Known as a clipping mask/path in print, this technique is most commonly used when non-destructively ‘clipping’ a rectangular asset (i.e. photo) into an irregular shape. Animated masks, blue/green screen, and rotoscoping are commonly used in animation and film to achieve similar or unique effects. Fun Fact: The term clip art originated from physical, pre-existing art assets being clipped by scissors to be used in layouts.
- Transition: A way for us to describe in-between motion from point A to point B. In motion design, many common transitions have names, like ‘dissolve’.
- Compositing: Think of UI objects as individual pieces of a collage. When compositing, we are simply building a stack of UI elements, with overlap, order, and arrangement. The overall whole is a composite image. If a timeline is involved, video (and audio) can also be part of the composite. Applying motion to a large number of composed elements is one of the most time-consuming parts of motion design.
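To make tweening and easing concrete, here is a small framework-agnostic sketch in plain JavaScript (function names are my own) that generates in-between values the way a tweening engine would:

```javascript
// Easing functions map normalized time t (0..1) to normalized progress (0..1).
var easings = {
  linear: function (t) { return t; },                // constant rate
  easeOutQuad: function (t) { return t * (2 - t); }  // fast start, slow finish
};

// Tween: generate in-between values from a to b across `frames` steps,
// shaped by the given easing function.
function tween(a, b, frames, easing) {
  var values = [];
  for (var i = 0; i <= frames; i++) {
    var t = i / frames;                    // normalized time
    values.push(a + (b - a) * easing(t));  // interpolated in-between value
  }
  return values;
}

// A ball dropping 0 -> 100 px, with four in-between steps:
var linear = tween(0, 100, 4, easings.linear);      // [0, 25, 50, 75, 100]
var eased = tween(0, 100, 4, easings.easeOutQuad);  // [0, 43.75, 75, 93.75, 100]
```

Both tweens hit the same keyframes (0 and 100), but the eased version covers more distance early on, which is what makes motion feel physical rather than mechanical.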
Interaction Storyboarding and Visualization
Today, we are working on a content-consumption application (i.e. a digital magazine) that is currently in design. Comps and layouts have been completed, and we are now looking to breathe life into a specific screen’s component with animation.
The component of focus highlights the latest app content (top right), updated daily. Each content item in this component has an image/photo and copy. Because this component is so crucial, additional focus will be given to each content item as it is presented to the user.
Depending on how we lay out the grid, it is important to consider how the user might hold the device during a passive or resting state. What would they need to do to interact with content on the screen? When switching from passive to active, does the user need to re-orient their grip to reach promotional content?
- Each piece is individual. To emphasize this, we can animate each one independently. Each content box is numbered in order of importance, so it might make sense to animate them one at a time, in this order.
- Each piece belongs to a greater whole, so the animation applied to each content box should be consistent in theme. This is where animation can go very wrong, e.g. when each box animates ‘differently’. Motion design is still design: we should follow the same foundational principles.
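One way to satisfy both points above is to give every box the same animation, offset in time by its importance rank. Here is a small framework-agnostic sketch (names and values are my own; in Titanium Mobile, each computed delay could be passed to a view’s animate call):

```javascript
// Stagger: every content box shares the same animation, offset in time
// by its importance order so boxes appear one at a time, in order.
function staggerSchedule(boxCount, duration, gap) {
  var schedule = [];
  for (var i = 0; i < boxCount; i++) {
    schedule.push({
      box: i + 1,                   // importance order (1 = most important)
      delay: i * (duration + gap),  // ms to wait before this box animates
      duration: duration            // same duration for every box: consistent theme
    });
  }
  return schedule;
}

// Four content boxes, 300 ms each, 100 ms breathing room between boxes:
var plan = staggerSchedule(4, 300, 100);
// plan[0] -> { box: 1, delay: 0, duration: 300 }
// plan[3] -> { box: 4, delay: 1200, duration: 300 }
```

Because the per-box animation is identical and only the start time varies, the sequence reads as one cohesive gesture rather than four competing effects.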
Let’s head to ‘paper’ (I’m using Illustrator) before jumping straight to a tool. Below is what our animation might look like in concept, as a series of storyboards or pre-visualization. Time-to-complete details are listed, but may change once we see it in motion. Other details are absent, as ‘look’ of motion can be difficult to visualize without tools or prototypes. Fun Fact: Previz or pre-visualization is used during pre-production and production, on CG or complex scene-heavy films, to help bridge the communication gap (and eliminate potentially expensive mistakes) between traditional storyboards and more complete shots.
Some tools that can help with motion storyboarding and prototyping:
- Tumult Hype
- Apple Keynote and Microsoft PowerPoint
- Adobe Flash Professional, Edge, After Effects, Premiere, etc. Even Photoshop is capable of animation.
- Apple Final Cut Pro
Fullscreen is recommended…
When experimenting with animation, it is important to identify and categorize goals and limitations:
- Goal… What are we trying to achieve?
- In this scenario, we are staggering each part of a content item’s presentation to emphasize the dynamic, daily-updated latest-content component.
- Time… How long will things take?
- Direction… In 2D, we can largely focus on horizontal and vertical: x and y. 3D needs to make sense, as an additional axis adds complexity. Does the overall theme of the app support 3D?
- Properties… Make notes of what properties need to change and their values at keyframes (these are important for the developer).
There are many ways to achieve the above effect. We could even simplify it by using fades and opacity animation, a common way to get quick results. Instead, we’ll make this a bit more interesting by using a transitional effect on the photo known as a ‘wipe’. Text will use a simple position- and opacity-based animation.
To achieve a ‘wipe’ effect, we will need to take a look at masking. Below are two possible solutions, each providing a different photo-reveal effect.
The first image is a true ‘wipe’ reveal: the content is already positioned, but masked. The mask is animated, meaning that over time the mask shrinks to reveal the image below it. This solution is more difficult, as it requires an additional UI element. However, it also performs better than the next approach, because we do not need to animate raster, pixel-based graphics.
The second scenario uses a positional ‘wipe’ by eventually covering a visible background with an image. Over time, the image moves to cover what is below. In this scenario, the mask (parent) does not move. In our mobile app framework, Titanium Mobile 2.0+, parent views will automatically clip child views that are out-of-bounds, making this solution a bit easier and requiring less views.
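The second approach might be sketched like this in Titanium Mobile 2.0+. The layout values and image filename are illustrative assumptions; the key idea is that the stationary parent view acts as the mask, since children outside its bounds are clipped automatically:

```javascript
// The parent view is the 'mask': it never moves, and in Titanium 2.0+
// it automatically clips any child content outside its bounds.
var mask = Ti.UI.createView({
  left: 20,
  top: 20,
  width: 200,
  height: 150
});

// Start the photo fully outside the mask, one width to the left,
// so it is entirely clipped (invisible) at first.
var photo = Ti.UI.createImageView({
  image: 'latest-item.png', // hypothetical asset
  left: -200,
  top: 0,
  width: 200,
  height: 150
});
mask.add(photo);

// Slide the photo into place. The mask stays put, so the image
// appears to wipe in from the left over 300 ms.
photo.animate({
  left: 0,
  duration: 300,
  curve: Ti.UI.ANIMATION_CURVE_EASE_OUT
});
```

Easing out here (fast start, gentle settle) keeps the reveal feeling organic rather than mechanical.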
Both effects are shown in the video below:
Fullscreen is recommended…
Passive vs. Active Interaction
Older video games were famous for what are called idle animations, one of the most popular being Sonic the Hedgehog’s impatient foot tap. After a certain period without interaction, an idle animation would play.
The animations we have been covering, like most transitions, are based on passive, indirect interaction. The user has no direct control over the content box animation. Nevertheless, the user did cause the animation to occur by navigating to the specific application context where the latest-content component resides.
Active interaction can reinforce what the user is doing, such as an indicator showing where on the screen the user is touching: any visual or auditory element that reacts to direct interaction. However, active interaction is action-focused. How does this help a user understand the content displayed on screen?
The more appropriate question would be: Is this application more active or passive? In action-oriented applications, the user is learning by doing. The opposite is true for passive-heavy applications. In the context of a content-consumption application, the user is learning by doing less.
Progression-based games are great examples of using both active and passive, temporal-based feedback. The player is focused on controlling the character, engaged in gameplay. However, with this direct focus, it is difficult to provide and consume status feedback. Instead of pausing gameplay to get the user’s focus, games take a clever approach of using highlight animation and sound when large events or milestones occur.
Initially, this can give the user pause, but over time, conditioning works its magic and the user easily connects meaning with that feedback; to the point of subconscious acknowledgment. Note: Though UI and gameplay design are different, they should be a part of a cohesive whole. Great gameplay won’t hide a poor UI and vice versa.
Unlike storyboarding for passive interaction, storyboards for active interaction will typically illustrate the pointing device, like a mouse or finger.
Check out: The Windows Phone GUI does an excellent job of balancing temporal feedback with passive and active interaction. My personal favorite is when the user scrolls down to the bottom of their live tiles list. The list will resist and settle. Additionally, the right arrow (top-right) will horizontally bounce, communicating to the user: ‘Hey, there is more this way!’ In interaction design, this is often referred to as a form of teasing. Another common form of teasing on Windows Phone is content clipping or truncation.
Coincidentally, Microsoft is removing the right arrow in Windows Phone 7.8, to make room for more tiles.
- Learning takes time. Animation can lead the eye. Staggering presentation with temporal feedback can support learning and improve layouts by bringing static screens into view, in fun and dynamic ways.
- It’s more work, but with today’s tools and Titanium Mobile, value add can easily outweigh the time commitment.
- Subtle, short motion design (250-500 milliseconds) is often the most effective and non-intrusive. Longer animations might be cool the first time… but only the first time.
- Not all effects are practical or achievable. Opacity animation is the most common type of transition and it looks great. If it’s bad or doesn’t feel right, change or get rid of it. Animation should be organic, not jarring.
- Any applied design disciplines and techniques should be cohesive and consistent with the whole.
- Passive and active interaction should be accounted for. Will the user be more active or passive? This will illuminate which area should receive more focus.
- Connect interaction storyboards to other planning elements, like wireframes, UI prototypes, user story diagrams, flowcharts, etc.
- Consider time we can’t control, such as a request to a remote image, where delivery timeline is determined by the network.
- Read more about how to use animation with our mobile framework, Titanium Mobile.
As always, please sound off in the comments section if you would like to expand on any content presented in this article. Until next time! 🙂
Fred Spencer is an Application Architect in Appcelerator’s Professional Services Group and is a digital media instructor with the Rhode Island School of Design Continuing Education department. He is currently responsible for the overall architecture and design of the NBC iOS and Android apps. You can follow him on Twitter @anovice.