3-Conceptualizing Interaction - Conceptual Model-01!08!2024

The document discusses the interaction design process, emphasizing the importance of understanding user experiences and conceptualizing designs before coding. It highlights the need for a collaborative design team to explore various interaction styles and technologies to enhance usability while considering safety and user distraction. The text also covers the significance of conceptual models and metaphors in designing intuitive interfaces, illustrating how these elements can improve user engagement and satisfaction.


Interaction Design Process and

Conceptualizing Interaction
Dr K Ganesan
Professor – Higher Academic Grade
School of Computer Science Engineering and
Information Systems
VIT University, Vellore – 632014
kganesan@vit.ac.in
Phone : 6382203768
• Introduction
• Assume you are asked to design an app to share
photos, movies, music, chats, documents, etc., in a
safe and enjoyable way.
• Would you sketch the entire interface, work out the
system architecture, or start coding straight away?
• Or would you ask users about their current experiences
of sharing files, look at existing tools (e.g., Dropbox),
and think about why and how to design the app?
• Having a clear understanding of why and how we
are going to design something, before coding, can
save enormous amounts of time, effort, and money.
• Once ideas have been coded, it is harder to throw them away.
Understanding Problem Space and Conceptualizing Design
• We work out how to design the physical interface and
what technologies and interaction styles to use, e.g.,
whether to use multitouch, speech, GUI, head-up
display, AR, gesture-based interaction, etc.
• The aim is to ensure that usability and user experience
goals are achieved.
• For example, consider the design of an integrated in-car
entertainment system consisting of a phone and a
navigation system that allows the driver to follow
directions, find nearby eating places, watch TV, and
read email.
• It would help drivers get live sports news or find
the cozy coffee shop in the next town.
• But how distracting is that?
• Imagine using projection technology to display the
information throughout the vehicle: on the dashboard,
rear-view mirror, and windshield.

The left image shows a combined GPS and TV system used in
Korea. The right one shows digital information about the
vehicle’s state and the driver’s navigation plans, projected
onto the windshield. A voice browsing system allows the driver
to control interactions while driving. Is it safe?
• It is dangerous: it can distract drivers and encourage
them to switch their attention from the road to the
projected images.
• We must understand and conceptualize what the current
user experience or product is, and how it is going to be
improved or changed.
• For this we need a design team that works out how
their ideas will support or extend the way people
communicate and interact in their activities.
• In the above case, drivers may be trying to read maps
while turning the steering wheel, or looking at a small
GPS display on the dashboard when approaching a
roundabout; our problem is to ensure that drivers
can continue to drive safely without distraction.
• Be careful about assumptions and claims.
• An assumption means taking something for granted
when it needs further investigation,
• e.g., that people will want to watch TV while driving.
• A claim means stating something to be true when it
is still open to question, e.g., that a multimodal style of
interaction for controlling a car navigation system,
one that involves speaking while driving, is safe.
• In many projects, the task is to work out how an existing
product can be improved using a different set of functions.
• Articulating the problem space is done as a team effort.
• Team members will have differing perspectives on
the problem space.
• For example, a project manager may be concerned about
budgets, timelines, and staffing costs, whereas a software
engineer will think of breaking the problem down into
specific technical concepts.
• Spending time enumerating and reflecting upon ideas during
early stages of the design process enables more options and
possibilities to be considered.
• (It is said that Japanese teams spend 80% of their time on
planning and 20% on execution; in India it is often the
reverse, hence the failures.)
• Suppose a large software company wants to develop an
upgrade of its web browser for smartphones because its
marketing team reported that many customers had
switched to a new mobile browser.
• The marketing team felt that there was something wrong
with their browser and that rivals had a better product.
• The design team assumes that they have to improve the
usability of their browser’s functions and make the interface
simpler, more attractive, and more flexible to use.
• Researchers found that many customers had not used the
bookmarking tool.
• One user said that the web browser’s functions for organizing
bookmarks were fiddly and error-prone on a multi-touch screen.
• Another said that moving bookmarks between folders was awkward.
• Some found it difficult and time-consuming to move
bookmarks between folders.
• A software engineer claimed that bookmarks are no longer needed,
as people revisit a website by referring to the history list of
previously visited pages.
• Some noted that users do not like to leave a trail of the sites
they have visited.
• One suggested including the most visited sites as thumbnail
images or tabs.
• An engineer pointed out that this might clutter the small screen.
• Finally, everyone agreed that the web browser’s bookmark structure
was too rigid and that the goal should be a simpler way of
revisiting websites on the smartphone.
• 3D TV – Use Case
• 3D TV went on sale in 2010. There was hype about its enhanced
user experience while watching movies, sports and dramas.
• The assumption was that people would not mind wearing 3D
glasses and that they would pay more for a 3D-enabled TV.
• The claim was that people would enjoy the enhanced clarity,
color, and depth of 3D TV, based on feedback from 3D films
(say, Avatar).
• Is an enhanced cinema experience desirable in the living room?
• Will people like wearing special glasses in their living room?
• Will people carry the glasses wherever they go (e.g., when
visiting family or friends)?
• Will all images look better when viewed in 3D?
• It is a joy when our team wins and painful when the opponents win.
• Can I read the newspaper while eating, or read email on the
laptop, while watching the 3D TV?
• Is it not unpleasant to switch from a 2D display to 3D?
• Conclusion
• Are there problems with the existing product or user experience?
• If so, what are they? How can the proposed design ideas
overcome them? (e.g., face-recognition-based attendance)
• If a new design is proposed, make sure to show how it will
support, change, or extend current ways of doing things
(e.g., avoiding proxies).
• Having a good understanding of the problem space helps
design teams to conceptualize the design.
• The benefits are: Orientation – the design team can ask
questions about how the conceptual model will be understood
by end users.
• Open-mindedness – avoids a narrow focus by the design team.
• Common ground – the design team can establish a set of
common terms that all can understand and agree on, reducing
misunderstanding and confusion; the agreed conceptual
model can become a shared blueprint.
• This can be represented as text and/or in diagram form.
• Conceptual Models
• “A high-level description of how a system is organized and
operates”; it outlines what people can do with a product
and what concepts are needed to understand how to
interact with it.
• Conceptual models help designers decide, say, what the best
way is to sort and revisit saved pages, and how many and what
types of containers should be used (folders, bars, panes).
• This can be repeated for other functions of the web browser.
• The operations they support should be intuitive to use.
• For example, operating systems and word processors allow
the user to carry out the same activity in many ways.
• E.g: ?
• Users have to learn each of the different styles to decide
which they prefer.
• Users get annoyed when they find that a simple way of doing
something has been changed.
• Most interface applications are based on established
conceptual models.
• For example, when designing an e-commerce site, the
placement of items a customer wishes to purchase
into a shopping cart, and proceeding to checkout
when ready to purchase, are standardized.
• Collections of patterns are readily available for designing
the interfaces for these core processes (e.g., patterns for
online forms and event calendars).
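The cart-then-checkout conceptual model can be sketched in code. The following is a minimal, hypothetical Python sketch, not any real e-commerce library; the class and method names are invented for illustration.

```python
# Sketch of the 'shopping cart' conceptual model: items can be
# added or removed freely, and nothing is committed until the
# customer explicitly checks out.

class ShoppingCart:
    def __init__(self):
        self.items = []  # (name, price) pairs, not yet purchased

    def add(self, name, price):
        self.items.append((name, price))

    def remove(self, name):
        # Removing an item is as non-committal as adding one.
        self.items = [i for i in self.items if i[0] != name]

    def checkout(self):
        # Only here does the customer commit to the purchase.
        total = sum(price for _, price in self.items)
        order = list(self.items)
        self.items = []  # cart is emptied once the order is placed
        return order, total
```

The key point the model captures is that `add` and `remove` are reversible browsing actions, while `checkout` is the single committing step.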
• It is rare to see a completely new conceptual model
that transforms the way we do our everyday activities.
• Design concept – a set of ideas for a design.
• It comprises scenarios, images, mood boards, or text-based
documents.
• For example, consider a design concept for an ambient
display aimed at changing people’s behaviour in a
building: an animated pattern of twinkling lights
embedded in the carpet near the entrance of the
building lures people towards the stairs.
• New conceptual models emerge that change our
everyday work activities on a computer.
• For example: the desktop, the spreadsheet, and the web.
• These products have made what was previously limited
to a few skilled people usable by everyone.
• The desktop changed how office tasks are done
(creating, editing, and printing documents).
• The spreadsheet made accounting flexible and made it easier
to do many computations by filling in interactive boxes.
• The web allowed anyone to remotely browse a network of
information.
• Kindle and iPad, have introduced novel ways of
interacting with digital information.
• Classic conceptual model: the Xerox Star (1981)
• Developed by Xerox, it changed the design of personal
computing. It was targeted at workers not interested in
computing and built on familiar knowledge of an office.
• Paper, folders, filing cabinets, and mailboxes were
represented as icons and were designed to possess
some of the properties of their physical counterparts.
• Dragging a document icon across the desktop screen
was seen as equivalent to picking up a piece of paper
in the physical world and moving it.
• Dragging an electronic document on to an electronic
folder was seen as being similar to placing a physical
document into a physical cabinet.
• It also introduced some operations that could not be
done in the physical world; for example, electronic files
could be placed onto an icon of a printer to print them out.
• Interface metaphors
• Metaphors provide a structure with its own behaviours
and properties; the desktop metaphor is one example.
• Another is the search engine. The term refers to a software
tool that indexes and retrieves files remotely from the
Internet, using various algorithms to match terms
entered by the user.
• This includes listing and prioritizing the results of a search.
• It does these actions in ways quite different from how a
mechanical engine works or how a human might search
a library for books on a given topic.
• In e-commerce sites, the ‘add to shopping cart / trolley /
basket’ metaphor is followed by the ‘checkout’ metaphor.
• Here, placing an item in the basket does not commit the
customer to buying it there and then. It enables them to
browse further and select other items – as in a physical
store.
At times, interface metaphors contravene people’s expectations.
• The recycle bin sits on the desktop, although logically and
culturally it should be placed under the desk.
• Designers can fall into the trap of making a virtual
object resemble a physical object that is badly designed.
• For example, a virtual calculator may be designed to look and
behave like a physical calculator.
• It may then make excessive use of modes, have poor labelling
of functions, and require hard-to-manipulate key sequences.
• In some, the user has to use shift keys (e.g., deg, oct, and hex)
instead of having these redesigned as dedicated software buttons.
• A better approach is to think about what kinds of calculations
people typically want to do when using their phones or
computers. A simple example is the Mac Calculator.
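The mode problem can be made concrete in code. The following hypothetical Python sketch (names and behaviour invented for illustration) contrasts a calculator key whose meaning depends on a hidden shift state with dedicated functions that have no hidden state.

```python
import math

class ModedCalculator:
    """Mimics a physical calculator: one key, two meanings."""
    def __init__(self):
        self.shift = False  # hidden state the user must keep track of

    def press_shift(self):
        self.shift = not self.shift

    def press_sqrt_key(self, x):
        # The same key computes sqrt normally, but x^2 when shifted.
        return x ** 2 if self.shift else math.sqrt(x)

# The redesigned alternative: one dedicated software button per
# operation, so there is no mode for the user to misremember.
def sqrt(x):
    return math.sqrt(x)

def square(x):
    return x ** 2
```

With `ModedCalculator`, pressing the same key after an unnoticed shift silently squares the input instead of taking its square root; with dedicated functions, the operation is always visible in the name of the button pressed.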
• Material metaphors
• A recent interface metaphor is the card. Social media apps –
Facebook, Twitter, Pinterest – present their content on cards.
• Cards have a familiar form factor, and many kinds of them
exist: playing cards, business cards, birthday cards,
credit cards, driving licences, postcards, red cards.
• They can be easily flicked through, sorted, and themed.
• Google uses the metaphor of the surface of paper as a new
kind of UI in its devices – smartwatches, phones, tablets.
• So, for example, a Google Now card (which provides short
snippets of useful information) appears on and moves across
a smartphone screen the way people would expect a real
card to – in a lightweight, paper-based sort of way.
• Why are metaphors popular?
• People use metaphors and analogies as inspiration for
understanding, and for explaining to others what they
are doing or trying to do, in terms that are familiar
to them.
• Teachers introduce something new to students by
comparing the new material with something they
already understand.
• For example, compare automotive and biomedical
applications: pressure mats used for releasing the
relevant airbags versus quantifying the amount of dead
cells in the foot of a diabetic patient.
• Interaction Types
• There are many ways a person can interact with a product or
application:
• instructing,
• conversing,
• manipulating, and
• exploring.
• Decide which one to use and why,
• e.g., speech-based, gesture-based, touch-based, menu-based, etc.
Cost and other constraints will dictate which interface style to
use for an application.
• Consider designing a computer system for autistic children to
communicate and express themselves better.
• These children may not express what they feel or think through
talking and are more expressive when using their bodies and limbs.
• So a talking-based interaction style is not effective, but interaction
with the system via a physical and/or digital space is a better fit.
• Instructing: Users give instructions to a system.
• They do this by typing commands, selecting options from
menus, speaking commands, gesturing, pressing buttons,
or using a combination of function keys.
• Conversing: Users have a dialog with the system.
• They speak via an interface or type the questions to which
system gives a text or speech output.
• Manipulating: Users interact with objects in a virtual or
physical space by manipulating them (opening, holding,
closing, placing).
• Exploring: Users move through a virtual/physical space.
• Virtual environments include 3D worlds, augmented and
virtual reality systems.
• Physical spaces use sensor technologies – smart rooms and
ambient environments (automatic opening and closing of
doors, lights, controlling fan speed, temperature of AC units)
• These types can describe the specific domain- and context-based
activities users engage in, say, learning, working, socializing,
playing, browsing, writing, problem solving, decision making,
and information searching.
• Instructing: This describes how users carry out their tasks by
telling the system what to do, e.g., asking a system to tell the
time, print a file, or remind the user of an appointment.
• Many home entertainment systems, consumer electronics
devices, and computers use this. Here one can press buttons
or type strings of characters (e.g., timers in grinders and
washing machines).
• Operating systems like Unix and Linux are primarily command
based, but Windows and other GUI-based systems use control
keys or selection of menu options via mouse or touch screen.
• For example, a user writing a report in a word processor will
want to format it, count words, and check spelling, and can
issue the necessary commands for each.
• The figure below shows pictures of two different vending machines,
one that provides soft drinks and the other a range of snacks. Both
use an instructional mode of interaction, but the way they do so is
quite different.
• What instructions must be issued to obtain a soda from the first
machine and a bar of chocolate from the second?
• Why has it been necessary to design a more complex mode of
interaction for the second vending machine?
• What problems can arise with this mode of interaction?
• In the first vending machine, the user has to choose from a small
number of drinks, so each is represented by a large button;
the user simply presses one.
• The second machine is more complex, as it offers a wide range
of snacks.
• One has to read the code (e.g., C12) under the chosen item,
key it into the number pad next to the displayed items, check
the price of the selected option, and ensure the amount of
money inserted is the same or greater.
• But a customer may misread or mistype the code, which can
lead to the wrong item being delivered.
• A better way is to use buttons showing miniature versions of
the snacks, placed in a large matrix based on the space available.
• Then the customer only has to press the button for the chosen
item and put in the correct amount of money.
• But if a new product comes out, part of the physical interface
of the machine has to be replaced, which would be costly.
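The second machine's instruction sequence can be sketched as a small validation routine. This is a hypothetical Python sketch; the codes, items, and prices are invented for illustration.

```python
# Sketch of the code-entry vending machine: the customer keys in
# a code, and the machine validates it and checks the money.

STOCK = {
    "C12": ("chocolate bar", 25),
    "A01": ("crisps", 20),
}

def vend(code, money_inserted):
    if code not in STOCK:
        # e.g. the customer misread C12 as G12 on the shelf label
        return "error: unknown code"
    name, price = STOCK[code]
    if money_inserted < price:
        return "error: insufficient money"
    return name
```

The two error branches correspond to the usability problems noted above: a misread or mistyped code, and a mismatch between the price and the money inserted.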
• Conversing
• Here a person converses with a system, which acts as a
dialog partner.
• It is two-way communication, not simply the machine
obeying orders.
• It is used in apps where the user wants specific kinds
of information or wants to discuss issues,
• e.g., advisory systems, help facilities, and search
engines.
• We use voice recognition or menu-driven systems
interfaced with phones.
• E.g., we can tell Google what we want – “dial
Ramanathan”. It looks at the contact list, finds
Ramanathan, and dials the number.
• Advanced systems use natural-language processing that
can parse and respond to queries typed by the user.
• E.g., a user types a specific query – ‘How do I change the
margin widths?’ – and the system gives various answers.
• For example, Apple's speech system, Siri, lets us talk to it as if it
were another person.
• We can ask it to do tasks for us, such as make a phone call,
schedule a meeting, or send a message.
• We can also ask it indirect questions such as, “Do I need an
umbrella today?”
• It will look up the weather for where we are and answer
with something like, “I don't believe it is raining,” while
also providing a weather forecast.
• A problem with the conversational interaction type is
that certain kinds of tasks are transformed into
cumbersome and one-sided interactions.
• This is true for automated phone-based systems
that use auditory menus to advance the interaction.
• Users have to listen to a voice providing several
options, then make a selection, and repeat through
further layers of menus before accomplishing their
goal, e.g. reaching a real human or paying a bill.
• For example, consider a dialog between a user who wants to
find out about car insurance and an insurance company's
reception system:
• <user dials an insurance company>
• ‘Welcome to Bajaj Insurance Company. Press 1 if you
are a new customer; 2 if you are an existing customer.’
• <user presses 1>
• ‘Thank you for calling Bajaj Insurance Company. If you require
house insurance press 1, car insurance press 2, travel insurance
press 3, health insurance press 4, other press 5.’
• <user presses 2>
• ‘You have reached the car insurance division. If you
require information about fully comprehensive
insurance press 1, third-party insurance press 2 . . .’
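The layered auditory menu above can be sketched as a small state machine, where each keypress advances one level. The prompts follow the dialog; the structure itself is an illustrative sketch, not a real IVR system.

```python
# Each menu level is a state; keypresses move between states.
MENU = {
    "root": {"prompt": "1: new customer, 2: existing customer",
             "1": "products", "2": "products"},
    "products": {"prompt": "1: house, 2: car, 3: travel, 4: health, 5: other",
                 "2": "car"},
    "car": {"prompt": "1: fully comprehensive, 2: third-party"},
}

def navigate(keypresses):
    """Follow a sequence of keypresses and return the prompt heard last."""
    state = "root"
    for key in keypresses:
        state = MENU[state].get(key, state)  # invalid keys are ignored
    return MENU[state]["prompt"]
```

The sketch makes the one-sidedness visible: the caller can only pick from whatever options the current state offers, one layer at a time.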
• Manipulating
• This involves manipulating objects. For example, digital objects
can be manipulated by moving, selecting, opening, and closing.
• Extended actions include zooming in and out, stretching, and
shrinking – actions that are not possible with objects in the
real world.
• Human actions can be imitated via physical controllers or
gestures (e.g., Kinect) to control the movements of an on-screen
avatar.
• Physical toys and robots are embedded with computation,
enabling them to act and react in programmable ways
depending on whether they are squeezed, touched, sensed,
or moved.
• Tagged physical objects (balls, bricks, blocks) manipulated in
the physical world (e.g., placed on a surface) can cause other
physical and digital events to occur, say, a lever moving or a
sound or animation being played.
• Benefits of direct manipulation:
• Helping beginners learn basic functionality rapidly;
• Enabling experienced users to work rapidly on many
tasks;
• Allowing infrequent users to remember how to do
operations over time;
• Reducing the need for error messages;
• Showing users how their actions are furthering their
goals;
• Reducing users' experiences of anxiety;
• Helping users gain confidence and mastery and feel
in control.
• Apple was an early adopter of an operating environment
that used direct manipulation of objects.
• Mac desktops and iPad displays use it.
• Many apps, including word processors, video games,
learning tools, and image-editing tools, have been built
using direct manipulation.
• Trainer kits can collect data from sensors A, B, and C,
analyze how well you are driving, and provide scenarios
accordingly.
• In online learning, depending on the time spent reading
material and the questions answered, generative AI can
supply additional materials.
• Drawbacks of Manipulation
• Not all tasks can be described by objects and not all actions
can be done directly. Some tasks can be done better with
commands.
• Assume that in a word processor you misspelled the name
Moorthy as Murthy throughout the document.
• In a direct manipulation interface, you would need to find each
instance of Murthy, manually select the ‘u’ in every Murthy,
delete the ‘u’, and insert ‘oo’.
• This is tedious, and you may miss one or two instances.
• With command-based interaction, you can instruct the word
processor to find every ‘Murthy’ and replace it with ‘Moorthy’.
• This is done by selecting a menu option or using a combination
of command keys, and by typing the changes needed into a
pop-up dialog box.
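The command-based alternative amounts to a single instruction over the whole document. This hypothetical Python sketch stands in for the word processor's find-and-replace command; `find_and_replace` is an invented helper, not a real API.

```python
# One command replaces every occurrence, instead of the user
# selecting and retyping each instance by hand.
def find_and_replace(document, find, replace):
    count = document.count(find)  # how many instances will change
    return document.replace(find, replace), count
```

Running it over a document containing ‘Murthy’ twice returns the corrected text and a count of 2, with no instance missed.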
• Exploring
• This involves users moving through virtual or physical
environments. For example, users can explore a virtual 3D
environment, say, the interior of a building.
• Physical environments can be embedded with sensors that,
when they detect the presence of someone or certain body
movements, respond by triggering certain digital or physical
events.
• (E.g., automatic A/C temperature control by a count sensor in
halls, or adjusting A/C temperature according to human
movement in bedrooms.)
• One of the best-known examples is Second Life.
• Many virtual landscapes depicting cities, parks, buildings,
rooms, and datasets have been built, enabling users to fly
over them and zoom in and out of different parts.
• A number of physical environments have been developed using
embedded sensor and location-detection technologies.
• These are called context-aware environments.
• The location and/or presence of people in the vicinity of a
sensing device is detected, and based on it the system can
provide digital information on the user's device (e.g., at a
historical site or museum) or decide which action to perform
(e.g., changing the lights in a room) that is relevant to the
person at that time and place.
• Location-based virtual guides on cell phones provide useful
information about restaurants, historical buildings, etc.,
as the person wanders near them.
• They can help children learn.
• A physical woodland can be wired to provide digital
information to children as they move around.
• Depending on which part of the woodland they stand by
(a particular tree, bush, or hole), an image will pop up or a
sound will be played.
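The wired-woodland idea can be sketched as a simple proximity trigger. This is a hypothetical Python sketch; the landmark names, coordinates, and trigger radius are invented for illustration.

```python
# Each landmark is registered with digital content; a child's
# sensed position triggers the content when they come close.
LANDMARKS = {
    "oak tree": {"pos": (10, 20), "content": "image of an owl"},
    "badger hole": {"pos": (40, 5), "content": "badger sounds"},
}
TRIGGER_RADIUS = 3  # how close (in metres) a child must be

def sensed_content(child_pos):
    """Return the content for any landmark within the trigger radius."""
    x, y = child_pos
    for landmark in LANDMARKS.values():
        lx, ly = landmark["pos"]
        if ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5 <= TRIGGER_RADIUS:
            return landmark["content"]
    return None  # nothing nearby, so nothing is triggered
```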
Smart Bin Tracking System
• Paradigms, Theories, Models and Frameworks
• A paradigm is a general approach adopted by a
community of researchers or designers for carrying
out their work, in terms of shared assumptions,
concepts, values, and practices.
• (Examples: CLI, natural language, AR, VR, and tangible
user interfaces such as the Magic Wall.)
• A theory is a well-substantiated explanation of some
aspect of a phenomenon.
• Affordances: designing elements that clearly
communicate their possible uses. Example: ?
• Visibility: making system functions and options
discoverable. Example: ?
• Feedback: providing clear and timely information
about system responses. Example: ?
• A model is a simplification of some aspect of HCI
intended to make it easier for designers to predict
and evaluate alternative designs.
• Interaction Models:
• Direct Manipulation: Users interact with objects on
the screen directly. Example: ?
• Menu-Based Interaction: Users select options from
presented menus. Example: ?
• Form-Based Interaction: Users fill out forms with
specific data. Example: ?
• A framework is a set of interrelated concepts
and/or a set of specific questions intended to inform
a particular domain area (e.g., collaborative learning),
online communities, or an analytic model.
• Development Frameworks
• React: A JavaScript library for building user
interfaces, React provides a component-based
architecture. It’s widely used for creating web and
mobile applications.
• Flutter: A cross-platform framework for building
native-like apps for iOS, Android, web, and desktop
from a single codebase. It offers a rich set of pre-
built widgets and tools for fast development.
• Paradigms
• Following a paradigm means adopting a set of
practices that a community has agreed upon.
• These include: the questions to be asked and how they
should be framed; the phenomena to be observed;
• and the way in which findings from studies are to be
analysed and interpreted.
• In the 1980s, the HCI paradigm was how to design
user-centred apps for the desktop computer.
• Questions about what and how to design were framed in
terms of specifying the requirements of a single user
interacting with a screen-based interface.
• WIMP was used for designing the core features –
Windows, Icons, Menus, and a Pointer.
• This was later superseded by the GUI.
• In 1991, the ubiquitous computing paradigm emerged.
• Here computers become part of the
environment, embedded in a variety of everyday
objects, devices, and displays.
• People are kept informed of what is happening
around them, what is going to happen, and what has
happened.
• Devices enter a person's centre of attention when
needed and move to the periphery of their attention
when not, enabling the person to switch calmly and
effortlessly between activities without having to figure
out how to use a computer when doing their tasks.
• The technology is unobtrusive and disappears into the
background.
• In the late 1990s, the paradigm was to embed and augment
the environment with various computational resources to
provide information and services when and where desired.
• An assortment of sensors has been embedded in our
homes, hospitals, public buildings, physical environments,
and even our bodies to detect trends and anomalies,
providing data about our health and movements and about
changes in the environment (e.g., insulin pumps and
continuous glucose monitors).
• Here sensed data are used to automate mundane
operations and actions that need to be done.
• The challenge in this paradigm is how to ensure that
information passed via interconnected devices and objects
is secure and trustworthy.
• (Electronic Belt for Waist Measurement – 5582/CHE/2013: a
half-length rope that alerts the user about belly-size increase.)
• Who is in Control?
• A recurrent theme in interaction design is who should
be in control at the interface.
• Different interaction types vary in terms of how much
control a user has and how much the computer has.
• Whereas users are primarily in control with command-
based and direct manipulation interfaces, they are
less so in sensor-based and context-aware environments,
such as the smart home.
• User-controlled interaction is based on the premise
that people enjoy mastery and being in control.
• It assumes people like to know what is going on, be
involved in the action, and have a sense of power over
the computer.
• In contrast, context-aware control assumes that having the
environment monitor, recognize, and detect deviations in a
person's behavior can enable timely, helpful, and even critical
information to be provided when considered appropriate.
• For example, elderly people's movements can be detected in
the home and emergency or care services alerted if
something untoward happens to them that might otherwise
go unnoticed: for instance, if they fell over and broke a leg
and were unable to get to a telephone.
• But what happens if a person chooses to take a rest in an
unexpected place (e.g., on the carpet) that the system
detects as a fall?
• Emergency services are called unnecessarily!
• Will the person be mortified at having triggered a false alarm?
• And how will it affect their sense of privacy, knowing their
every move is constantly being monitored?
• Another concern is what happens when the locus of control
switches between user and system.
• Who is in control when using a GPS for vehicle navigation?
• At the beginning, the driver is very much in control, issuing
instructions to the system as to where to go and what to
include, e.g., highways, gas stations, traffic alerts.
• However, once on the road, the system takes over and is
in control.
• People slavishly follow what the GPS tells them to do,
even when common sense suggests otherwise.
• To what extent do we need to be in control in our everyday
and working lives?
• Are we happy to let computers monitor us and decide what
we need, or do we prefer to tell them what we want to do?
• How would we feel if our car told us to drive slowly because
it has started to rain?
• How would you design a paper-based calendar versus a
digital calendar?
• Theories
• Theories are used in HCI for analysing and predicting the
performance of users doing tasks with specific computer
interfaces and systems.
• For example, cognitive theories about human memory were
used to find the best ways of representing operations, given
people's memory limitations.
• These theories can identify factors – cognitive, social, and
affective – that are relevant to the design and evaluation of
interactive products.
• Models – Models can be applied to interaction design.
• The seven stages of action model describes how users move
from their plans, to executing the physical actions they need
to perform to achieve them, to evaluating the outcome of
their actions with respect to their goals.
• Another class of models lets researchers and designers analyze
user performance with different interfaces to find which is
the most effective.
• In recent times, user models have been used to predict what
information users want in their interactions.
• Frameworks
• Frameworks are used in interaction design to
constrain and scope the user experience for which
they are designed.
• Unlike a model – which is a simplification of a
phenomenon – a framework offers advice to designers
as to what to design or look for.
• This advice can come in many forms, including steps,
questions, concepts, challenges, principles, tactics, and
dimensions.
• There are frameworks for helping designers think
about how to conceptualize learning, working,
socializing, fun, emotion, etc.
• Norman's framework comprises three interacting
components: the Designer, the User, and the System.
• Designer’s model: the model the designer has of how
the system should work.
• System image: How the system works is shown to the
user through the interface, manuals, help facilities, etc.
• User’s model: How the user understands how the
system works.
• Garrett's framework comprises five planes:
• the Surface plane (top), the Skeleton plane, the Structure
plane, the Scope plane, and the Strategy plane (bottom).
• Each plane is dependent on the planes below it.
• Such dependencies reflect a ripple effect where
decisions made early on affect those further up the
planes.
• It is used in web development and for understanding the
elements of the user experience.
