Generative AI has become a staple of the workplace. According to a 2024 Forbes study, 79% of respondents reported using services like ChatGPT at work. As AI becomes more accessible and widely used, leading experts like Kay Firth-Butterfield—TIME 100 awardee and 2025 top AI speaker—are using their platforms to aid this transition. We sat down with Kay to discover the benefits and ethical risks of technologies such as generative AI.
Q: What advice do you have for businesses that want to start using artificial intelligence?
Kay Firth-Butterfield: That’s a huge question because there are so many ways AI can be used in a business. But I’d say to use it successfully, you need to be very aware of the responsible or trustworthy aspects of artificial intelligence.
You shouldn’t deploy AI—or, if you’re creating it yourself, design and develop it—without keeping ethics in mind. We call it responsible or trustworthy AI now because those factors might affect successful deployment if you don’t get it right.
There’s the possibility of serious damage to a company: not just brand or customer loss, but financial loss too. More and more regulators are starting to take action against those using AI irresponsibly or without trust built in. No one wants to be seen as untrustworthy with AI; it’s not a good look.
Where to deploy AI? Some common uses are in human resources, to help with talent spotting. But there are big problems with using AI in HR because it can bring in human biases. We’re seeing some lawsuits in the US where companies unwisely bought AI for HR and are being sued for using discriminatory tech. You have to be really careful; it’s a balance between AI’s benefits and thoroughly thinking through buying and deploying these systems.
Other AI business uses? Manufacturing companies across factory floors, or drug companies to help design pharmaceuticals. For example, DeepMind’s AlphaFold enabled big advances in using AI for biological work. Then there’s generative AI, which everyone’s talking about. You could use it in business, but be aware that if you use models like ChatGPT, the data you feed it goes in and could come out anywhere. Don’t give it trade secrets. We saw a confidential Samsung memo get leaked globally when an employee had ChatGPT transcribe it.
So, if you’re using generative AI in business, understand what AI is. It just predicts the next word; it’s not actually intelligent. Let teams play with it after your legal department green-lights it, and once your C-suite understands AI and how you plan to use it—with guidelines from your CTO or CIO.

Q: What are the benefits of staying ahead of the technological curve?
Kay Firth-Butterfield: I think it’s important to make use of the latest technologies in the same way it would have been important for Kodak to notice there was a change coming in the photography industry. Businesses that don’t at least look at digital transformation are going to find themselves on the back foot.
But a word of caution here: you can also go hell for leather and then find that you have the wrong AI or the wrong systems for your business. I would say that it’s really important to practice caution, keep your eyes open, and think about this as a business decision every step of the way.
It’s particularly important when you decide that yes, you’re ready to use artificial intelligence, to hold your suppliers’ feet to the fire—ask the right questions, ask detailed questions. Make sure you have somebody in house or a consultant who can help you ask the right questions. Because, as we all know, one of the greatest wastes of money in digital transformation is if you don’t ask the right questions and do it correctly.
Q: What are some of the ethical risks associated with artificial intelligence?
Kay Firth-Butterfield: This is where I’ve spent a lot of my career. I started my life in the AI world after my legal career, thinking about the risks of AI to humanity, business, and governments. I was the world’s first Chief AI Ethics Officer. I had to come up with a name, and I think perhaps now I might have gone for Trustworthy Technology Officer or Responsible AI Officer.
In those days, we weren’t talking about ethics. Why are we talking so much about ethics now? Well, because people’s ethics differ depending on where you are geographically. The world agrees that we should worry about safety and robustness, but we should also worry about bias in the system. Bias is created when biased historical data goes into the machine; all that data is in the machine, and it’s making decisions from it.
The other way it gets into the machine is from the people who code. A lot of the coders are young men under the age of 30, and only 22% of coders are women—obviously those are pre-generative AI figures, but it’s not likely to change that much because you have many more men using generative AI than women, simply because of the types of jobs we do. So, those coders bring their own values, and the values of young men may not be the same as my values.
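The point about biased historical data can be made concrete with a deliberately tiny, hypothetical sketch: a naive hiring model that scores candidates by the historical hire rate of their group does nothing "wrong" in its code, yet it faithfully reproduces whatever skew was in the past decisions it learned from. The records and groups below are invented for illustration only.

```python
from collections import Counter

# Hypothetical historical hiring records: past human decisions a model
# might be trained on. The skew lives in the data, not in the code.
history = [
    ("male", "hired"), ("male", "hired"), ("male", "hired"),
    ("male", "rejected"),
    ("female", "hired"), ("female", "rejected"),
    ("female", "rejected"), ("female", "rejected"),
]

def hire_rate(group):
    """Fraction of past applicants in `group` who were hired."""
    outcomes = Counter(outcome for g, outcome in history if g == group)
    return outcomes["hired"] / sum(outcomes.values())

# A naive "model" that predicts success from the historical hire rate
# of a candidate's group simply replays the old bias as a prediction.
print(hire_rate("male"))    # 0.75
print(hire_rate("female"))  # 0.25
```

Real systems are far more complex, but the failure mode is the same: if the training data encodes past discrimination, the model's "objective" predictions will too.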
Q: Tell us about generative AI. How does it work and what are the risks?
Kay Firth-Butterfield: What generative AI allows you to do is ask questions of the world’s data by just typing in a query. If we think back to science fiction, that’s what we’ve always wanted—to ask the computer a question and have the computer, with all this knowledge, come up with an answer.
How does it do it? Well, it predicts which word would come next in an answer. It does that by accessing enormous amounts of data—we call them large language models. Basically, the machine reads, or at least accesses, all the data that is available on the open web. In some cases, and this is a major point of contention in courts of law, it also accesses IP and copyrighted material. We’ll see a lot of courts involved in those conversations.
When it has ingested all this data, it predicts what word naturally follows another word, and it can construct really complex answers. Anyone who has played with it knows it can give very interesting, eloquent replies just by word prediction.
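The "predict the next word" idea can be sketched in a few lines. The toy model below just counts which word follows which in a tiny made-up corpus and predicts the most frequent successor; real LLMs condition on long contexts with billions of parameters, but the underlying task of next-word prediction is the same. The corpus and function names are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real model trains on.
corpus = (
    "the machine reads the data and the machine predicts the next word "
    "the machine answers the question"
).split()

# Count which word follows which (a bigram model: context of one word).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "machine" follows "the" most often here
```

Chaining such predictions word by word is how a model "constructs" a sentence; eloquence emerges from scale, not from understanding.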
Sometimes it gets it wrong. In the AI community, we call that ‘hallucinating.’ It’s basically just lying to you. That’s another problem, because we need to reach a point where we can trust the machine’s outputs. And as these false outputs feed back into the data the models draw on, they can encounter and repeat them.
Q: Why should businesses consider the applications of generative AI?
Kay Firth-Butterfield: All of us can use AI now—it’s a hugely democratising tool. It means small and medium-sized enterprises that couldn’t have used AI in the past now can. When we talk about it, we also need to recognise that the bulk of the world’s data is created first in America, then in Europe and China.
There are several data challenges in terms of what these large language models are using. They’re not actually using all the world’s data; they’re using a smaller subset, and we are beginning to talk about digital colonisation: projecting content derived from American and European data onto the rest of the world and expecting people there to use it.
Obviously different cultures need different answers. So there are a lot of really beneficial aspects to generative AI, but also some really big challenges ahead.