Our team spoke to AI expert, Tim Roberts, to gain insight into how technology will shape the future of the jewellery and watch industry and supply chains globally. Here’s what Tim had to say.
As a business, it is very important to think about what emerging AI risks mean for you, and it is useful to think of ‘risk’ in two categories. The first is existing risks that are already here but have been turbocharged by AI. Criminal networks, for example, have much more sophisticated tools to commit fraud, hacking, or theft, including theft of data.
Another example is deepfakes. Think of it like this: I’m trying to sell you a jewel or gemstone and convince you to make a payment, but I want to divert that payment to my own bank account – and I now have much more powerful tools to do that. This is what I mean by amplifying or ‘turbocharging’ an existing risk.
New risks specific to generative AI (such as ChatGPT) include hallucinations, where an AI tool can create something completely fictitious. Let’s imagine a jewellery manufacturer creates an AI fashion advisor that tells you how to wear their jewellery for a special occasion. What if it starts giving you inappropriate or unsafe advice? We’ve seen examples of this: a supermarket in New Zealand had a recipe planner for ingredients you could buy, and it started creating toxic recipes, one of which invited you to make chlorine gas. Why does this happen? Well, generative AI can make mistakes. It isn’t a truth engine, it’s a plausibility engine, so it can give wrong or factually incorrect answers. That is made worse by the fact that it can ‘learn’ from inappropriate responses. For example, it looks likely that kids were typing weird combinations of ingredients into the New Zealand recipe planner to get funny answers, and that may have caused it to generate more inappropriate and dangerous recipes.
Another example is when somebody went on an airline website to ask its chatbot whether they were eligible for a refund on a plane ticket they weren’t able to use after a bereavement. Under the airline’s policy they weren’t eligible for a refund, but the chatbot they were talking to decided they were. When they then sent in their ticket and asked for the refund, they were denied. The court said that if that’s what your online chatbot tells the customer, then that’s your policy. There was also an experimental chatbot on Twitter which started making sexist and racist comments, having overheard and copied them. Ultimately, incidents like these can lead to a lack of trust among your customers, which can in turn damage your brand. You can readily imagine that a jewellery house giving inappropriate advice to clients would be damaging to its brand and its relationships.
There’s a general belief among both science fiction writers and technology developers that we want something with a ‘human face’ to interact with. However, when technology has a human face, like a robot, it can be very plausible, making errors hard to spot. To keep things safe, some kind of monitoring of these tools is needed, even if a human isn’t the interface – either by a real human or by an independent AI tool. A human being, of course, can spot something that doesn’t make any sense very quickly. What does that mean for the jewellery industry? Well, if you have AI tools creating marketing copy, packaging ideas, and jewellery designs, it becomes much easier for others to mimic those patterns. With human-centric AI, it will be much easier to spot when something doesn’t look right, or when something could be verging on plagiarism. So that notion of keeping a human in the loop – having some kind of human oversight – helps you spot some of the obvious problems that might emerge from using AI tools.
Design adds a lot of value to jewellery. There’s obviously a very big difference between a piece of gold and a gold necklace designed by a famous designer. If I could create a range of silver jewellery that imitates one of a jewellery house’s lines, it would be easier for me to fake their products and profit from their trademarked designs. AI makes it much easier for someone in a small factory in a low-cost location to create a jewellery line simply by showing an AI tool an existing collection and asking it to produce ten times as many designs ‘in the style of’ a specific designer.
It also makes it easier to fake provenance. For example, if you’re an auction house and somebody has a piece of jewellery that might be stolen, AI makes it much easier to create a fake provenance for the product on sale. And finally, of course, all the packaging and marketing material becomes easier to fake. So again, if I’m in the business of making fake jewellery and importing it, AI could make my life a lot easier. In the past, creating a fake provenance required elaborate research and forgery of documents involving real craftsmanship to be convincing. Now generative AI can pick up details from the internet and create a composite document that is very hard to tell from a real autograph letter, bill of sale or export certificate.
This is hard for lawmakers and regulators to fully grasp; given that most lawmakers are not computer scientists, it’s a challenge for them to legislate quickly for these kinds of emerging risks. However, this varies in different parts of the world, and currently the EU is taking the lead. It has made a very conscious decision to be the most decisive digital regulator. We already have the AI Act in Europe providing some protections. However, it’s not yet providing a lot of protection against the kinds of things I’ve mentioned.
The EU AI Act is now in force, and regulators recognise it is just the start of regulating a complex set of risks. It was conceived before the GenAI revolution really took off and the EU intended to follow it up with further regulations for safe AI. However, under the second Trump administration the mood in the US has clearly turned towards de-regulation, and there is more anxiety among policy makers in the EU that Europe risks falling behind the US and being too risk averse to encourage innovation effectively. So, I think we are now seeing a pause in new regulations, while the EU (and the UK) seek to understand how they can foster innovation.
Currently, the AI Act focuses on the highest-risk issues, like facial recognition, and on the things most likely to damage people’s human rights. Once they’ve created that platform and established regulators with the required skills, you can see them adapting quite quickly. The EU has also shown it is willing to take on the big technology companies, whether they’re from the US or China. In the US, it’s a slower process of trying to create federal laws because it’s such a partisan environment, but the Federal Trade Commission is really taking a lead itself on efforts to control big tech and to protect consumers from the potential harms created by AI. Above, I gave some examples from Canada and New Zealand. If those had happened in the States, I could have imagined the FTC intervening very rapidly to say this is causing detriment to consumers. The government in the UK has taken the view that rather than create a new regulator with the expertise to oversee AI, all existing regulators are being asked to figure it out. For example, Ofcom, the FCA and the Information Commissioner’s Office, which regulates privacy, have all got to become experts in AI individually.
I believe we are going to need a specialist, expert AI regulator in each country – a centre of expertise able to understand the emerging risks. Maybe it should be housed within one of the existing regulators, like Ofcom. The emergence of generative AI is a huge change, similar to when the internet first came online. I’m old enough to remember a time before the internet; in my first job, I couldn’t get on the internet or e-mail people outside my firm. Then it became available and everything changed. The emergence of AI is like that. We’ve got to think of a step change in regulation to protect people, and I think that will happen, as consumers will demand it.
However, there’s a gap between the huge advancements being made in the AI space and regulation, with legislators playing catch-up. In the meantime, businesses must ensure responsible use, especially of publicly available AI tools like ChatGPT. Step one is to understand what is happening. There are a lot of companies out there that are starting to experiment with AI. I was at a technology industry event recently and a CEO sitting next to me told me, ‘I’ve said to my company, everybody must start using ChatGPT in the office so that we can figure out the power of it and what we can do with it.’ I didn’t say this to him, but I thought, ‘That is very scary!’ If you’re putting customer data, product data, your proprietary code, your supplier data into ChatGPT, do you know where it’s going and who has access to it? No.
If you’re a jewellery manufacturer and you have a set of criteria or a target audience you’re trying to reach, and you start putting their details and their preferences into ChatGPT – for example, by asking what kind of jewellery products you should come up with, or what kind of designs would appeal to this audience – you’ve put that information out into the public domain. It is essential that as an organisation you know what you’re doing before you start experimenting. You must first understand your aim and what you’re trying to achieve: Are we using AI in marketing? Are we using AI in product design? Are we using AI to do financial analysis of our competitors or of our suppliers? What could we use it for?
Once you’ve established that, you can understand more about the opportunities that will create the most value. Be focused, and understand that you’re taking risks every time you use it. For example, fashion-led businesses are very concerned about how consumer tastes are changing. There was a series of articles after the change of leadership at Adidas about what they need to do to stay in touch with younger generations after they parted ways with Kanye West. There are some very powerful AI-based tools out there for understanding customers and their preferences, and organisations can use these to feed into design. This can be done by creating a series of controlled experiments, monitoring them carefully and not letting them run indefinitely. There are numerous examples of AI applications that have worked well for a period but then start diverging and creating hallucinations – hence the need for human oversight.
One of the biggest downfalls of AI tools is that they are not predictable at the outset. If you build and design a customer insight or customer communication tool that writes copy for your emails or your social media, you must continuously monitor it, because in a year’s time it might start saying things you wish it didn’t. Again, we’re back to remaining human-centric: if you’re not monitoring this copy, you’re not in control of what you’re saying to your customers.
Many consulting firms like ours are publishing reports highlighting what they think are the most interesting applications, and I have noticed an interesting consensus around three types of application that come up again and again. The first is customer personalisation. Typically, when invitations to store openings or other announcements are sent out, the same message goes to everyone, but now it is easier to send personalised messages. If you have the data, an AI tool can help personalise your copy in a realistic way. I personally enjoy feeling like I have a relationship with a jewellery house. My wife and I have bought each other presents from Cartier on important milestones over the years, and we went to Cartier in Paris on our 20th wedding anniversary, which was a big occasion for us. If they remember something like that and can communicate it to us, it’s going to make me feel very different about the next wedding anniversary – as opposed to them sending me a very generic email. This is a massive opportunity for jewellers. Customers want to feel special when buying a valuable jewel, especially if it’s a wedding ring or an anniversary present.
The second is spotting suspicious activity. AI can help you monitor suspicious activity online or in store. It is very hard to be continually vigilant about everyone who comes into a retail store, but if you can automatically spot suspicious behaviour, that is much more useful. If a client comes in but never buys anything, yet asks various questions, looks at where the cameras are, asks to use the bathroom, then disappears – those are all suspicious triggers. A human may fail to notice these patterns. And again, it doesn’t need to be prejudicial to my ability to buy something. You would need to consider whether you have obtained permission from customers, have a sign that says customers will be recorded for everybody’s security, and consider the fact that you could be violating someone’s privacy rights. However, surveillance is a necessary part of security, and CCTV is an accepted part of visiting a bank or a jewellery store these days. Spotting suspicious activities, fraud or theft definitely falls within this area.
Why does AI bias occur? Well, AI bias is a phenomenon that emerges from using historical data to train models. For example, if I ask a GenAI tool to show me a picture of a CEO, it might show me a man, because statistically it has seen more male CEOs in its training data. But we might not want it to carry that particular bias forward, so it needs careful correction. Conversely, there are also examples where over-simplified corrections to such biases give weird or unhelpful results. For example, if I ask for an image of former US presidents, it might show me a selection of men and women, even though there haven’t yet been any female presidents.
And then finally – and this might be most relevant to bigger jewellery manufacturers that cover the whole supply chain, from the diamond mine all the way to the store – there is something we call resource optimisation. How do I monitor my use of resources from end to end? There are many AI applications out there that help with managing logistics, storage and transportation. If you have a secure supply chain, you may ask yourself: how do I optimise my use of resources and assets all the way from a diamond mine to the store on Bond Street or Madison Avenue? That’s an expensive supply chain to manage, and if I could find a way of managing it slightly better, I could take some cost out of my supply chain and improve my margins. That’s more relevant to higher-volume manufacturers.