Boomi EMEA CTO Ann Maya on why AI trust takes time
From satellite navigation to AI agents, Maya argues that humans tend to absorb new technology gradually — and that firms need to build their data foundations first
With a psychology degree and two decades in enterprise IT, Boomi EMEA CTO Ann Maya has an unusual vantage point on AI. Over coffee with TechInformed, she explains how she sees AI less as a rupture than a continuum — the latest in a long line of technologies that humans have quietly absorbed and evolved beyond.
Her advice to CISOs and IT leaders is less about which AI tools to buy than what to fix first. And her answer, arrived at through psychology, satellite navigation software and nearly seven years at integration platform Boomi, is consistent: the data, always the data.
How did your career path lead to becoming a CTO?
I went to the University of Washington in my home state. I originally pursued an engineering degree, so I did a lot of calculus, physics and math. Three years into that, I realized I was tired of being surrounded by 197 guys and maybe three girls in these big lecture halls, and spending a lot of time in these dark physics basements. I remember thinking: what are those people doing out there running around in the sun? They were the arts and sciences students. So I thought maybe that was more my vibe, and I switched my credits to psychology and worked in the psychology department.
I was fascinated by how memory is stored and consumed, and the neural basis of behavior. I’ve always been curious about how things are made and how they become something practical, so it’s actually not that far off from engineering.
After that, I took a break and went to Japan to teach English for a few years. I met my husband there and eventually ended up in Ireland. While I was there I thought, “Okay, time to look for a real job.” I realized I wanted to get into IT, so I went back to school and got a certification in IT and started programming.
Do you ever draw on your psychology knowledge in your current role?
If you think about how the human brain captures and stores memory, there are some similarities with how AI functions today — although there are also important differences. As AI progresses, we’ll see more clearly that it isn’t like the human brain, even though the parallels are fascinating.
It has helped me think about processing. My first job in IT was as a programmer for a satellite navigation system. We processed backend data blocks — there was no interface. Other companies provided the interface; we sold the data blocks. We worked in a Unix environment because you could compress simple text really efficiently. This was before Google Maps, so we’re going quite far back.
I recently showed a slide in a presentation asking: “Do you remember folded maps?” When I looked around the Zoom call, I realized that half the people had probably never used a paper map.
We went from paper maps to satellite navigation devices, where you had to download data into a unit and put it in your car. And now we have Google Maps on our phones. That’s been a huge progression in a relatively short time.
Interestingly, there wasn’t a big “wow” moment when paper maps disappeared. We just quietly adopted the new technology.
AI is a bit similar. Right now, we’re at the GPS stage of AI. Very quickly we’ll get to the point where it’s just something you walk around with, part of everyday life.
Psychologically that’s interesting: how people accept new technology, how they store information, retrieve it and how trust builds over time. Trust is a huge part of AI adoption. Humans provide the trust layer. For example, we trust satellite navigation systems now. But if we had jumped straight from paper maps to those systems, people probably wouldn’t have trusted them. It was the gradual adoption of different technologies that got us here.
Do you think people are starting to trust AI agents more now?
At the individual level, we already trust AI quite a lot. Tools like ChatGPT, Perplexity and Gemini help with personal productivity. People feel they’re getting value from them.
But at the organizational level, companies haven't yet seen the same return on investment. That's because it's harder to translate those individual capabilities into business workflows. People also worry that if AI is too good, it might replace their job. So adoption happens gradually.
The key is making people’s lives easier. For example, if someone works in customer success and pulls information from multiple systems, they might have to combine data from different spreadsheets. If those systems aren’t connected properly, you don’t get the full picture. That makes it harder to trust AI outputs. But if integration platforms like Boomi connect that data automatically and securely, you can trust the results more and actually get real organizational value.
Do you think enterprises are ready for AI agents?
It’s happening now. We’re seeing customers move beyond pilot projects and put these systems into production.
Once agents start delivering value in someone’s day-to-day job, you move beyond optimizing individual tasks and begin improving entire workflows. Boomi calls this “agentic transformation.” The agent becomes part of the workplace, not just a helpful assistant.
For example, instead of one agent helping a customer success manager, you might have multiple agents working across a team to detect signs of customer churn — things like rising support tickets. Eventually you can orchestrate entire workflows. Later you can even look for new opportunities, such as identifying new market segments automatically.
What are organizations still getting to grips with on AI?
Many organizations are still trying to build the foundations needed for AI. To use AI properly, you need integrated data. If your systems aren’t connected, you end up with silos, and that’s a huge problem. With AI it’s even worse, because you’re trusting incomplete data.
So organizations need strong data integration, good pipelines and workflows, and proper observability. They also need security measures in place so they can detect and respond quickly to cyberattacks. If you get those foundations right, you can really start aiming for the bigger opportunities with AI.
How do you personally use AI in your day-to-day work?
I like using different models because each has different strengths. For deep research I use Perplexity. For general insights I might use Gemini. For more conversational tasks I use ChatGPT.
If someone hasn’t used AI much and feels unsure about it, where should they start?
In a work environment, the first step is checking which tools your organization allows. Then think about the tasks you dislike. For example, I don’t like taking notes, so I’ll turn on Gemini during meetings to handle that. You can also use AI to draft emails, review writing or start a document.
Basically, ask yourself: what would you do if you had the best assistant in the world? Personally, I’m still waiting for AI to do all my expense reports. That’s the one task I’d love to hand over completely.
Do you ever switch off from technology?
I do, but I also find this a fascinating time. I’m going snowboarding next week, so that will definitely be switching off. But even then I think about how technology could help. For example, imagine AI connected to ski goggles that could guide you down the mountain or help you find your friends at a ski bar. It could also provide safety alerts, like avalanche warnings. So yes, I switch off, but I’m always curious about what technology could do next.
How do you have your coffee?
A black Americano. But it has to be a really good-quality bean. I recently invested in a very expensive coffee maker, so the beans have to be freshly roasted.