This week I went to a Google Design Breakfast, ‘DESIGNING FOR TRUST IN AN AI-FIRST WORLD’ - a thought-provoking session in which the panel discussed best practices for designing inclusive, human-centred AI. Here are my key takeaways from the session:
LONG TERM TRUST RELATIONSHIPS
Tom Taylor, Co-Op: Customer and company/brand relationships can be ‘for the whole of your life - building trust to do things with you and for you’. That relationship is much longer than you’d expect; over such timescales trust has to be earned, and it can be easily broken.
- This is earned trust vs engendered 'trustiness' (the cheap way of getting trust) - companies plastering symbols of trust all over a system whose underlying principles are unsafe.
- When a brand gets it wrong (e.g. the TalkTalk data breach, the Ryanair pilot holiday crisis) - if your brand’s failure is picked up by the media, the damage is amplified. Lost trust can be irreversible and can bring down the company.
EASY INTERFACES, EASY LOSS OF CONTROL
Sarah Gold, IF Design Studio: How much access to your data have you given away when you trade transparency for ease within an interface or experience?
- ‘Making an interface easy doesn't make it easy to understand.’ Matt Jones from Google said we ‘wilfully push away complexity’, and that can mean we give away much more privacy than we realise.
- We almost need a ‘trust mark’ for tech, a symbol for safety. This is already happening with the password-strength green bar - a piece of interaction that gives you trust in the system; it’s the visible tip of public-key-infrastructure trust (it used to be signalled with padlocks).
- Public messaging at point of need - tell people things they need to know when they need it.
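The password-strength bar mentioned above is one of the few concrete interaction patterns in the session, so here is a minimal illustrative sketch of the kind of check that might sit behind one. The rules, thresholds, and labels are my own assumptions for illustration, not any real product’s logic:

```python
import re

def strength_label(password: str) -> str:
    """Return a coarse strength label to display next to a password field.

    Hypothetical scoring: one point each for length >= 8, length >= 12,
    mixed case, a digit, and a non-alphanumeric character.
    """
    score = 0
    if len(password) >= 8:
        score += 1
    if len(password) >= 12:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password):
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):
        score += 1
    # Map the 0-5 score onto a label the UI can colour (red/amber/green).
    return ["weak", "weak", "fair", "good", "strong", "strong"][score]
```

The point the panel made is that the label (and its green bar) is a trust signal at the surface: the user judges the system by this small piece of feedback, not by the cryptography underneath.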
TO BE HUMAN…
- In the future, how much will be auto-written (future emails/comms etc.)? How much of ‘ourselves’ is being compromised at a human level - does it matter that you’re receiving comms from AI bots vs. humans? How will this affect how you relate to companies, people, and information?
Priya Prakash, founder of D4SC, thinks there’s a lack of imagination in bot content and design. She’s sick of fast feedback, and asks how we might make a ‘lazy AI’ - one that speaks like us, adding ums and ahs - which would make bots more human. The question is: do we want them to be more human? How will we tell the difference? Maybe we should have one way of interacting and speaking with bots, and a different way with humans.
Rachel Coldicutt, CEO of Doteveryone, said AI is pointing towards a normalisation of behaviour - it teaches you the so-called 'right' way of saying something; e.g. ask Alexa for something with a stutter and Alexa doesn’t respond. It validates certain behaviours over others, and as someone with a stutter she is concerned that anyone who is ‘different’ for any reason will be treated as irregular by the norms of the system. Who controls what is deemed normal? The people who create the system may have unconsciously built it with their own preconceptions and biases.
- Will language and manners be shaped by what you learn from system feedback? E.g. if a system insists on hearing ‘please’ when you ask it for something, does that improve manners in society - and vice versa with ill-mannered systems?
Q. WHAT ARE THE BIGGEST CHALLENGES FOR DESIGNERS IN DESIGNING FOR TRUST?
- Show a different way things can be - make and test alternatives to the typical big-business agenda
- Share research openly, learn from each other
- Be clear about the test data: who created it (was there any bias?), who has access to it (data protection), and how much of your design is being affected by it
- Designers need to have a strong opinion at a policy/industry level - do we need ethics specialists to uphold ethical standards?