Shaping the Future of Communication: A Dialogue Ahead of the European Summit in Venice
AI, ethics, and the future of communication: a dialogue ahead of the Global Alliance’s European Summit and AI Symposium, set to take place in Venice this May.
This interview opens the communication campaign leading up to the European Communication Summit and the AI Symposium taking place in Venice on 16 May 2025.
Bonnie Caver, SCMP, IABC Fellow, former chair of the International Association of Business Communicators, board member of the Global Alliance for Public Relations and Communication Management, and co-facilitator of the session that will shape the Venice Pledge, a global commitment to ethical and responsible AI in communication, is interviewed by Lorenzo Canu, a Master’s student at the University of Amsterdam currently writing a thesis on the EU’s Trusted Flaggers mechanism and General Coordinator of FERPILab.
Throughout the conversation, Caver offers a forward-looking reflection on the profession’s evolving role in an AI-driven society, highlighting the importance of ethics, purpose, and global collaboration. The dialogue anticipates several of the core themes of the upcoming Summit and explores how communication professionals can lead technological transformation while remaining rooted in the values that define the field.
1. The AI Symposium, as part of the European Communication Summit in Venice, will bring together European communication leaders. From your perspective, what makes this summit particularly significant for public relations and communication professionals?
One of the most important aspects of the AI Symposium is its timing. We’ve been in a period of ongoing change for quite a while, but it’s becoming more intense. The pace is accelerating, and it feels almost like an avalanche. The change is abrupt, and it creates what I’ve called a kind of “whiplash” effect. We’re being tossed around in the snow — some days we feel completely buried, and other days we’re just being pushed from one direction to another.
This constant turbulence is leading to crises, reputational risks, security breaches, and a growing environment of misinformation and disinformation. And right at the center of all of this chaos is an opportunity for communication professionals. That places us in a very important role. We have an opportunity to lead and to support our organizations — and even governments — as we move through this highly volatile time.
Bringing everyone together at the summit is significant because we’re all in different places on our AI journey. We can take some lessons we learned from COVID-19 to guide us as the world begins to regulate and implement AI. We saw this dynamic clearly during the pandemic when Italy, for example, experienced the crisis earlier than others, and its approach to communication during that time offered lessons to places like Australia or the United States, which were hit later. It showed how much we can learn from each other.
We’re in another one of those global moments now. And while this disruption presents a chance to lead, it also highlights the need to collaborate and speak with one voice. When we act alone — as individuals or as associations limited to a single country — our impact is fragmented. That’s why the Global Alliance is so important. It gives us the platform to come together, share insight, and create a stronger, unified voice. That voice is being heard, and there are organizations globally that are eager to work with us.
So, this summit gives us the opportunity to decide together how we want to move forward when it comes to ethical and responsible AI for the profession and how we want to lead. The timing is exactly right for that conversation.
Lorenzo: Your description here resonates strongly with a paper by Jim Macnamara that I recently read, which focuses on the concept of liminality — a state of transition where normal boundaries and habits are disrupted, opening space for new thinking and reimagining. Just as Macnamara frames the pandemic as a macro-liminal moment, this summit feels like a liminal space for the communication profession. If I understand correctly, the “avalanche” and “whiplash” you refer to capture the disorientation of being between old systems and new possibilities. But in that turbulence lies the chance to build something better — perhaps even the kind of communitas Macnamara describes, where shared vulnerability leads to renewed collective purpose.
2. The summit’s theme focuses on “Technology, Trends, and Communication Transformation,” with a spotlight on how emerging tech like AI is reshaping the field. Why do you feel this focus is especially relevant right now for communicators?
I think in five years, we won’t recognize our profession. It’s going to change so much, and so significantly, that we have to rise to the occasion of guiding what we want our profession to look like in an AI-enabled world.
We’re going to have to fight a battle to stay relevant because technology — at first look — seems like it can do a lot of the things people think we do. So there’s an opportunity to rise to the occasion, but we have to define what our profession is, and we have to educate people on what we actually do.
I think we’ve done a better job of that coming out of COVID-19 in terms of showing the impact we make in organizations and the value of communication. But we’re not just “communicators,” and I’m pretty adamant about that. When leaders look across their organizations, they want everyone to be a communicator, from software engineers to customer service to manufacturing front-line teams. But if we keep talking about ourselves as “communicators,” we become no different from everyone else and risk being considered irrelevant.
We’re professional communicators, and that means we’re trained in guiding strategic communication that aligns with business goals and, frankly, enables an organization to thrive internally and externally in all environments. So I think there’s an opportunity for us to do a lot of education about our profession. But first, we have to be futurists, we have to be listeners, and we have to be learners. Otherwise, we risk becoming a historic relic.
Instead, I think we can be essential leaders in a technology-enlightened society.
A perfect example is my son — he’s an engineer. When he went to university, he had to take communication classes because the head of the engineering school said, “You won’t be a successful engineer if you can’t communicate.”
So, part of his curriculum included communication, and we often have conversations about it. His idea of communication and my idea of communication are night and day. And he grew up in a house with a professional communicator — two, actually, since my husband is my business partner and works in the same field. My son understands that there’s a difference, but outside of our profession, people tend to think everyone’s a communicator. That’s where the confusion lies.
3. The Venice Summit itself is a collaborative effort between Global Alliance and FERPI, with input from professionals worldwide. How important is this kind of international collaboration in developing ethical standards for AI in communication, and have you observed any differences in how various countries approach AI ethics in our field?
We're hoping we'll be able to gather enough data from different regions to make this work meaningful. We are really pushing for that. That’s why we kept the survey open for an extra week — to make sure we get more input.
Just to give a little bit of history: when we started looking into responsible AI — ethical and responsible AI — last year, we began having conversations with many of our Global Alliance members. We asked: “Where are you on this journey? What have you done around this topic with your members?”
What we found is that every association already has its ethical guidelines. The Global Alliance itself has 16 principles of ethics that are fundamental to the practice of public relations and communication management. One of the key parts of that framework is that we always revert to local ethics. So, if you're in Italy, for example, you refer to FERPI. If you are a member of PRSA, you refer to PRSA's code.
What we saw early on is that ethical standards are already being applied to AI in many places. They’re there. We didn't need to recreate that part.
Some organizations were ahead of the curve. CIPR stepped out early and said, “Here’s how our ethical standards apply to AI.” They provided comprehensive training and playbooks. PRSA followed, saying, “Here’s how we apply our standards.” IABC also provided its perspective. So we felt that, as the Global Alliance, we didn’t need to go back and rework ethics from scratch.
Instead, we believed we needed to take that ethical foundation and move forward with it — toward responsible AI. And responsible AI goes beyond ethics. Ethics are foundational, yes, but responsibility takes us a step further.
I define responsible AI as the intentional and transparent integration of AI into business operations, decision-making processes, and customer experiences — in ways that uphold your organization’s values, build stakeholder trust, and protect people from unintended harm. That’s responsible AI. And it’s broader than simply applying ethical standards.
So, when we created the Global Alliance’s first document on ethical and responsible AI, we didn’t try to redefine ethics, other than stating that the number one principle is ethical rigor. Ethics are already foundational to our profession.
And to our knowledge, no other organization has yet addressed responsible AI in our profession in this way. That’s why this work is so important. This document is a collaboration among our members, and it provides us with a unified voice — a shared view of how the public relations and communication profession should approach responsible AI on a global scale.
That’s also why we included professional development and advocacy in the original set of guidelines. We knew from the beginning that this space would be constantly evolving. We need to develop our profession to be ready for an AI-enabled future. But we also have to be clear advocates — saying what is right and what is not.
And we need to be the voice that stands up and says: This is not acceptable. Misinformation is not acceptable. Disinformation is not acceptable. Using AI to do either of those things is not acceptable.
We’re just at the beginning. Now, we have to finalize what responsible AI looks like for us as a profession. One thing I think is still missing — and something we’ll talk about — is a “do no harm” clause. I’ve been talking to people outside the communication field, and many of them emphasize how critical that idea is in the broader AI conversation. I think we should consider including it.
And then we need to ask ourselves, as a community: Is this where we want to take a stand? And do we want to do it together, with a collaborative voice?
That’s why there’s so much pre-work happening before the summit. You mentioned the survey, but we’re also having conversations with our regional councils — including some that won’t be at the summit. The European Regional Council of the Global Alliance is working with FERPI to host the event, but we want to make sure we’re hearing from every region, across the globe.
There’s a lot of work happening in preparation for this, so when we come together, we can have a truly collaborative conversation.
Once we agree on the principles, we’ll need to ask: Are they right? How do we define them clearly? What’s missing? How do we make them actionable? How do we train our profession? What tools do we need to support people in applying them?
Ultimately, we need to create something that lives — something that becomes part of the DNA of our profession.
That’s why we’re calling it the Venice Pledge. That’s the goal. We don’t just walk away with a statement — we walk away with a pledge to act, and to make responsible AI a reality in organizations across the world.
I think that every time the Global Alliance comes together — and every time our regional councils meet — this will become part of the conversation. Just as we regularly talk about the proposed 18th Sustainable Development Goal on responsible communication, I believe this, too, will become a core focus of our shared mission.
It’s essential to keep the conversation going. We can’t just throw out a document and assume we’re done. This field is evolving every day, and our approach needs to evolve with it.
Lorenzo: Your emphasis on embedding responsible AI into the DNA of the profession reminded me of the thinking behind the Global PR & Communication Model 2021. Together with Biagio Oppi, we worked on the Italian translation and developed a thesis, later published and available here, that explored its application in the national context. One of the Model’s key insights is the role of communication professionals as internal advisors — helping organizations integrate Purpose, Brand and Culture, Reputation and Risk, Communication, and Measurement into strategic decision-making. What you describe — turning values like Ethics and Responsibility into lived, operational practice — aligns closely with that vision. The Model calls on professionals not just to communicate around these pillars but to embed them in the very structure of how organizations create and sustain value.
4. Finally, looking ahead beyond the summit, what gives you hope about the future of AI in public relations? What should communication professionals keep in mind to ensure these technologies serve our industry in a positive, ethical way?
I have hope. I always have hope because we live by ethical and professional strategic standards such as the Global Capability Framework. And that’s what makes the difference between being a communication professional and being just a communicator.
We have a lot of work to do, but our profession has evolved before, and I believe it will continue to evolve. We just need to be proactive — as learners, teachers, and collaborators.
What gives me the most hope is the opportunity we have to lead. We can be a leader in this transformation, a leading voice in how it unfolds. And that’s where I see the potential. We can talk beyond technology because technology, in the end, should be used to advance the human race. It should help shape the future in ways that support our ability to work, to invent, and to build a better society.
Technology should help us do all of those things. But it’s up to us to keep doing what makes our profession — and our impact — truly human-centric.
As communication professionals, we already do so much of that work. We bring genuine creativity, critical thinking, and social authenticity. These are uniquely human skills. And they’re the strengths that can help us lead through this process of transformation.
We just have to be willing to step up and say, “We’re leading.” And then we have to do it. We need to be proactive in that leadership.
Because we do have a tendency — as a profession — to get caught up in tools. We get wrapped up in tactics. If we let that define us, we risk turning our profession into a historic relic.
Lorenzo: What you said about the importance of stepping forward and taking responsibility resonated with me. I believe the Global Alliance is a strong example of what it means to walk the talk. It doesn’t just speak about Ethics and Responsibility — it takes action to promote them. A recent example is the launch of the #NextInLine – Young Responsible Communicators initiative, which highlights a new generation of professionals who are committed to living these values. I was honored to be among those nominated, and it made me reflect on how essential it is not just to know ethical principles but to embody them in practice. It’s encouraging to see that these values are not just being written into frameworks — they’re being brought to life through people and real-world action.
To view the programme and register for the European Summit and the AI Symposium, click here.
Any thoughts or opinions expressed are those of the authors and not of the Global Alliance.