IN CONVERSATION WITH PERNILLE TRANBERG, DATA ETHICS, KEYNOTE SPEAKER AT CITYDNA SPRING CONFERENCE IN BOLOGNA: ‘REALITY CHECK’

Written in collaboration between City Destinations Alliance and Group NAO.

Source: Generated with Midjourney

There is an increasing allure in the use of emerging technologies amongst destination professionals, be that AI-generated content, virtual influencers, websites or visitor service chatbots. We find ourselves in an experimental phase where digital avatars may have seven fingers and new tools could either enhance or collapse data collection. But what are the potential risks when destinations use emerging technologies? And how should DMOs balance innovation with integrity?

The #HUMANMADE session at the CityDNA conference in Bologna left the room with a lot of questions, so we used some of the unanswered ones to sit down with Pernille Tranberg, Co-Founder of Think Tank DataEthicsEU, to explore the key questions and considerations DMOs should be asking themselves before diving head first into adopting new tools.

In your keynote, you mentioned the importance of NOT rushing into AI advancements, why is this?

Firstly, you’ll probably lose money if you’re a first mover. There’s an AI hype created by AI companies, and it’s profit driven. Hype is the gap between fantasies and real life. Cool down, get to know the technologies, and play with them before you adopt them in your company. Often, it’s the ‘second movers’, who learn from the experiences of others, that find better, more ethical applications of technology. And instead of letting technology outrun us and our wishes, we humans should be doing the running. What do we want? What can we, and what do we want to, use technology for?

What do you think about the use of human-like avatars for branding and influencing?

I think it is a joke when we try to humanise technology. Much of this technology is for fun, something commercial. I don’t think it should take over your brand. This trend of anthropomorphising technology, where we attribute human traits to AI, can lead to unnatural attachments, where you develop feelings for it. If you are not a human being in balance, if you don’t have self-confidence and if you are lonely, you are easier to manipulate.

Yes, there are risks with AI, but isn’t it important to engage and educate, instead of merely warning against it? For example, should AI be more integrated into the school curriculum?

We should educate. I am not saying we shouldn’t, I am saying we should use it responsibly. But we need to take time to find out how we want to use it, instead of either banning it or running into it uncritically.

Do you see any place for interactive technologies relevant for the tourism industry?

If I was considering going to Bologna, and I could get a virtual tour, then that could be more lively than just reading about it. Or if I go to a museum or on a tour, and I don’t understand the language, then maybe I would use technologically enabled live translations. I think it is useful when technology is used as a co-pilot, not to replace something. If I travel all the way here, I want an analogue experience. Tourism is about the analogue experience.

For destination management professionals exploring new tech tools, what critical questions should they ask before adoption?

Firstly, find out where the company is headquartered. If it is headquartered in the EU, Switzerland or Norway, it will usually be a better company, and that is only because there are stricter laws in Europe. Secondly, what kind of data do they take from you when you sign up? This will help you understand whether the tool will take personal or sensitive data. Thirdly, is this nice to have or need to have? If it is only nice to have, then think twice and avoid the pitfalls of using technology for technology’s sake.

Some might argue we have a fairly apathetic culture in relation to the use of our data now. What do you say to people who don’t really care about handing over their data?

I’d ask them, did you stop voting as well? If you don’t vote, then I understand, then you’ve given up, you have resigned your democratic rights. Your data is your identity; it can be used against you and profoundly impact your life, whether that be influencing job prospects, loan applications, right through to healthcare. Not wanting to have control of your data is the same as not wanting to have control of your own life.

How do you view the role of DMOs in balancing the use of big tech for content distribution against the need to protect data and maintain ethical standards?

DMOs face a real dilemma here. But the question becomes: Do you want to contribute to the economic success of big tech firms like Microsoft and OpenAI? Or would you prefer to support European models that might operate under stricter data protection frameworks? There is a crucial distinction to be made between models that use data without proper licensing or consent, and models where creators have explicitly granted permission for their use. By choosing to provide data to responsible European providers, DMOs can support ethical data practices.