The right question is not "where can we add AI". It is "where would a customer be glad we did, and how do we earn the trust to do more".
AI in retail banking has had three lives. The first was rules-and-classifiers, used silently for fraud, AML, and credit scoring. The second was conversational, mostly chatbots that promised more than they delivered. The third, ongoing now, is generative and assisted, where models can summarise, explain, propose, and complete drafts inside the workflow. The third life is the one redrawing job descriptions in the bank, not just inside design but across operations, risk, and product.
For a senior designer at ENBD, the working assumption should be that AI is already in the stack and the question is how to surface it well. The temptation to put a glowing button in a corner labelled "Ask AI" is real and almost always wrong. The right placements are quieter: an explanation that appears beside a transaction the customer does not recognise, a summary at the top of a long statement, a draft of a dispute that the customer can edit, an automatic categorisation of spend that can be corrected, a search that understands "the rent transfer last month" without forcing the customer to learn a query language.
Where AI earns its place
Three families of use cases tend to deliver real value in retail banking. The first is comprehension assistance: summarising statements, explaining transactions, surfacing answers buried in the fine print. The second is journey acceleration: pre-filling forms, drafting messages, suggesting recipients, flagging duplicates. The third is risk and safety: detecting unusual transfers, scoring incoming requests for fraud, monitoring sessions for compromise. Each has a UX shape. Each fails differently.
The states that matter
AI features have a wider set of UX states than most others. A button is pressed or not, but a model can be confident, partially confident, working in the background, missing data, or simply wrong. Designing those states is most of the job.
AI UX states for banking

Thinking (in progress). The model is working. Show that work without theatre. Indicate progress when the operation takes more than two seconds. Avoid imitating human typing if the underlying process is not actually streaming, and avoid reassuring copy that distracts from honest waiting.

Confidence (calibrated). The model is sure. Display the answer cleanly with the source it relied on, an option to disagree, and a way to ask for more depth. High confidence is not a licence to hide the original data; the customer should always be able to see the underlying transaction or statement line.

Partial output (drafting). The model has produced a draft and is asking the customer to verify or extend it. Use the tone of "here is a starting point" rather than "here is the answer". Make editing the default action, not an afterthought tucked behind a menu.

Failure (honest miss). The model could not produce a useful answer, or produced one that should not be shown. Say so plainly. Offer a manual path. Never invent a fallback that mimics confidence: silent failure is the easiest way to lose trust permanently.

Correction (repair). The customer disagrees and corrects the model. Treat the correction as data: acknowledge the change, persist it, and let it shape future answers in the same context. Without a correction loop, the rest of the AI interface is theatre.
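One way to keep a team honest about these states is to make them explicit in the code. A minimal sketch in TypeScript, assuming nothing about ENBD's actual stack (all names here are illustrative): modelling the five states as a discriminated union forces every screen to handle each one, because the compiler rejects a switch that misses a case.

```typescript
// Hypothetical sketch: the five UX states as a discriminated union.
// Field names are illustrative, not a real ENBD or vendor API.
type AiState =
  | { kind: "thinking"; startedAt: number }                      // in progress
  | { kind: "confident"; answer: string; source: string }        // calibrated
  | { kind: "draft"; text: string }                              // partial output
  | { kind: "failure"; reason: string }                          // honest miss
  | { kind: "correction"; original: string; corrected: string }; // repair

// Exhaustive handling: adding a sixth state without a branch here
// becomes a compile error, not a silent UI gap.
function statusLabel(state: AiState): string {
  switch (state.kind) {
    case "thinking":
      return "Working on it";
    case "confident":
      return `Answer (source: ${state.source})`;
    case "draft":
      return "Here is a starting point. Edit freely.";
    case "failure":
      return `Could not help: ${state.reason}. A manual path is available.`;
    case "correction":
      return "Correction saved.";
  }
}
```

The value is not the labels themselves but the shape: a feature that only models "loading" and "done" has nowhere to put an honest miss or a customer correction.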
The trust ladder
Customers do not extend AI a uniform amount of trust on day one. They extend it incrementally as the system proves itself. A useful mental model is a ladder of permissions. At the bottom, AI summarises information the customer can see anyway. One step up, AI categorises spend or detects duplicates, with the customer correcting as needed. Higher still, AI drafts messages and pre-fills forms, with the customer reviewing before submission. Higher again, AI takes small actions on the customer's behalf within tight rails (move spare change to savings, decline a known scam transaction). Only at the top does AI act with significant agency and money at stake, and that step should be reached deliberately, with explicit consent and a clear off-switch.
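The ladder can also be enforced rather than merely described. A hedged sketch, with invented rung names and no claim about how ENBD stores consent: each feature declares the rung it requires, and a gate compares that against what the customer has explicitly granted.

```typescript
// Hypothetical sketch: the trust ladder as ordered permission rungs.
enum TrustRung {
  Summarise = 0,    // explain data the customer can already see
  Categorise = 1,   // label spend, flag duplicates; customer corrects
  Draft = 2,        // pre-fill forms, draft messages; customer reviews
  SmallActions = 3, // tightly railed actions, e.g. round-up savings
  Agency = 4,       // money-moving agency: explicit consent required
}

interface CustomerAiProfile {
  grantedRung: TrustRung; // set by explicit consent, never inferred
  killSwitch: boolean;    // the clear off-switch for anything that acts
}

// A feature runs only at or below the granted rung, and the
// off-switch always wins for action-taking rungs.
function mayRun(profile: CustomerAiProfile, requires: TrustRung): boolean {
  if (profile.killSwitch && requires >= TrustRung.SmallActions) return false;
  return requires <= profile.grantedRung;
}
```

Encoding the ladder this way makes climbing it a deliberate product decision with an audit trail, rather than something a single feature team can do by accident.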
Bilingual AI is harder than monolingual AI
The UAE customer base is bilingual at minimum, often trilingual. AI features that work in English and not in Arabic are unfinished by definition. Beyond translation quality, there is the harder problem of cultural and tonal calibration. Arabic registers vary across dialects and contexts; a model that treats them as a single language loses its voice. The right approach is to test AI features in Arabic and English from the first sprint, with native speakers, in real customer scenarios, on real devices.
Compliance is your design partner
An AI feature in the bank touches data, model, and decision risk. The right partners are not lawyers blocking your work; they are model risk colleagues, fraud, legal, and Sharia (where applicable) co-designing the safe envelope. Bring drafts early. Document where the data goes, what the model can and cannot say, what is logged, and what the customer can see. Done well, this becomes a public-facing trust statement. Done poorly, it becomes a leak waiting to happen.
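The documentation the compliance partners need can start as something as small as a shared record shape. A sketch under stated assumptions (the fields and names are invented for illustration, not a regulatory schema): one auditable record per AI interaction, capturing what the model saw, what the customer saw, and what the customer is allowed to inspect.

```typescript
// Hypothetical sketch: one audit record per AI interaction, so model
// risk, fraud, and legal partners review concrete traces, not slides.
interface AiAuditRecord {
  featureId: string;          // which AI feature ran
  dataSentToModel: string[];  // named fields shared, never raw dumps
  modelOutputShown: string;   // exactly what the customer saw
  customerVisible: boolean;   // may the customer inspect this record?
  timestamp: string;          // ISO 8601
}

// The customer-facing view keeps the shape but masks internal
// field names, supporting a public trust statement without leakage.
function customerView(r: AiAuditRecord): AiAuditRecord {
  return { ...r, dataSentToModel: r.dataSentToModel.map(() => "[field shared]") };
}
```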
Reflections
Pick a screen in the current ENBD app you would happily delete. Now design the AI-assisted version that replaces it. What did you preserve and what did you let go?
Write the failure copy for the worst plausible AI mistake in your feature. Read it aloud. Would the customer accept it?
Where on the trust ladder would you place a "draft a dispute" feature, and what would have to be true to climb one rung higher?