When Robots Rule the World

Ali Abel
34 min read · Jan 11, 2021

Artificial Intelligence and its impact on the public relations profession

Photo by Mina FC on Unsplash

Originally written for GPRL 6105: Media, Culture and Society, part of the Master of Public Relations program at Mount Saint Vincent University; April 9, 2019.

It is easy to be afraid of things we do not understand. The ocean, tigers, Amazon’s Alexa and Apple’s Siri. Will the ocean’s tide pull me out to sea, where I will be stranded for the rest of my life? If that tiger pounces on me, will he purr and lick my face like a house cat? Is Alexa tracking my every move and sharing it with the government? How does she know that I need to buy milk? What is she doing with all that data?

New technologies are being introduced at an alarming rate, making it easy to forget that many of these technologies are based on research that was completed, and inventions that were created, decades ago. From the Turing machine to the first supercomputer, many of us have not known a life without some sort of technological presence. What we need to be afraid of is the pace of technological advancement that now exists, and how these new technologies will affect all aspects of our daily lives, for better or for worse.

According to the World Economic Forum, we have entered the Fourth Industrial Revolution. The First brought steam power to mechanize production; the Second saw inventions in transport, telecommunications and manufacturing, including the use of electric power to advance mass production; the Third gave us the Internet and other technological advancements which have brought us into the digital era. The Fourth Industrial Revolution is “an age in which scientific and technological breakthroughs are disrupting industries, blurring geographical boundaries, challenging existing regulatory frameworks, and even redefining what it means to be human” (2019). This revolution is introducing the world to technologies like artificial intelligence (AI), blockchain, drones and precision medicine, all of which are changing our lives, and transforming businesses and societies, while also posing new risks and introducing new ethical concerns. Earlier industrial revolutions were separated by roughly a century, but the pace of progress is quickening, and we need to keep up.

Within the public relations/communications profession (I will use these terms interchangeably throughout this paper), AI is starting to play a role in how we do our work. From automatically scheduling social media posts to writing media releases, technology is making an impact. We may not have robot assistants any time soon, but we will have the ability to use AI to help us identify emerging issues and risks that may have a negative effect on our organization. Understanding the technology, how it works, and the benefits and drawbacks is key if we want to be strategic leaders. As Galloway and Swiatek point out,

Education (self-initiated, or otherwise) will be key to helping public relations practitioners remain aware of the latest developments. In the short term, practitioners should seek training about the key aspects of artificial intelligence and its uses. In the long term, they should build on this foundational knowledge and ask critical questions about the roles that AI will play (2018).

As public relations professionals, we are at an ideal moment to have a say in the role AI will play in our profession and to help determine our own futures. With limited scholarly research on the subject, it is up to us to learn about the technology and to educate others — both within the profession and the leaders of the organizations we work for — about the benefits, uses and drawbacks AI can have in the work we do.

Literature Review

As mentioned earlier, not much scholarly research has been done on how AI will affect the public relations industry, and much of the current knowledge is available in trade publications or technology and business magazines, such as Communication World, Forbes, and Fast Company. While experience and lessons learned from practitioners are important, more research needs to be done to examine the long-term impact of technology on the profession.

Galloway and Swiatek (2018) have done some of the most recent research on the topic, acknowledging that AI “is attracting increasing attention in the promotional industries (public relations, marketing and advertising) as practitioners — and, somewhat belatedly, scholars — recognize its productive potential.” Their work recognizes that, at this point in time, AI technology is largely about task automation such as social media monitoring and predicting media trends, but future uses could involve identifying potential risks, using chatbots for client/customer service tasks, and through deep machine learning, improving message reception, retention and creation. The authors do raise questions around ethical and moral considerations, stating “if a given public relations activity is essentially performed by automated systems, who (or what) is to be held accountable for the outcomes?”

Most of the texts I reviewed focused on the broader impact of AI on society, not necessarily on the public relations industry; however, connections can be drawn between the two. In her chapter in Megatech: Technology in 2050, Lynda Gratton points out that one of the benefits of machines is their ability to make complex decisions, asking the question “As machines become more sophisticated, is this role of decision-making as a uniquely human skill coming under increasing pressure?” (2017). According to Gratton, there is evidence that in certain circumstances, machines can make better decisions than people. In times of crisis, will a machine be able to make the correct ethical and moral decision, or is that a skill that will remain solely with humans?

Several authors point out that robots will not take over our jobs in the near future, but that we will need to learn how to work alongside them. Adrian Wooldridge describes “clever knowledge workers” who can use technology to increase the amount of work they can complete in a day: academics who can write more papers, lawyers who can consume and interpret more material for a case, and journalists who can find better stories (2017). Ann Winblad adds that tasks we thought could only be done by humans can be done better when aided by machines, stating that “Fed the right data, algorithms can schedule, analyse, decide, predict, diagnose and even write news stories, rapidly penetrating the realm of easily repeatable tasks, complementing us with intelligence applied to enormous datasets” (2017).

Through all of this, it must not be forgotten that what we are experiencing is not new. Technology has been changing the face of various professions for a long time, and Ryan Avent reminds us of what has taken place over decades in the journalism industry:

Digital technology cost many printers their jobs long ago. Then came the internet, which allowed readers all over the world free access to a torrent of news and analysis, undermining subscription-based forms of journalism, while services such as Craigslist gutted newspapers’ advertising revenue. Now firms such as Facebook and Apple are rolling out curated news feeds which promise to serve readers with the best stories from publications around the world — undercutting another of the valuable roles played by skilled editors (2016).

Kent and Saffer make a similar point in their paper “A Delphi study of the future of new technology research in public relations” (2014), though a few years prior to Avent. The authors asked the question “Thinking 10 years out, how do you see software, technology, social trends, etc., influencing Internet communication or social media?”, and warned us that we need to stop seeing technology as a passive phenomenon. The authors issued a call to action for public relations professionals to “devote some time to understanding each new technology in light of what already exists,” and to be the leaders of establishing, and enforcing, ethical norms and rules of conduct for the use of technology in the profession.

So…what is Artificial Intelligence?

We throw around terms like artificial intelligence, automation and machine learning like hot potatoes, but many of us do not understand what these technologies do. Until we understand what AI is, we cannot grasp how it will affect the jobs of public relations and communications professionals.

Artificial Intelligence

In 1968, Marvin Minsky of the Massachusetts Institute of Technology defined AI as “the science of making machines do things that would require intelligence if done by men” (Boden, as cited in Crevier, 1993). What Minsky was referring to were digital computers that can be “made” to do things after being programmed in a certain way. Minsky’s definition is a good starting point, but it does not quite explain the science well enough.

According to Winston (1992), there are numerous ways to define the field of AI, including “The study of computations that make it possible to perceive, reason, and act.” This definition does not really explain how the technology works, but he goes on to explain the term from the perspective of goals, and that AI can be understood as part engineering, part science:

  • The engineering goal of artificial intelligence is to solve real-world problems using artificial intelligence as an armamentarium of ideas about representing knowledge, using knowledge, and assembling systems.
  • The scientific goal of artificial intelligence is to determine which ideas about representing knowledge, using knowledge, and assembling systems explain various sorts of intelligence.

In 2003, Callan explained that AI is not just about engineering, but also a science with a goal of understanding our own intelligence, as well as the intelligence of other animals. He goes on with a more detailed explanation, stating that “An AI application is designed to mimic some aspect of human intelligence, whether that be reasoning, learning, perceiving, communicating, or planning…[the systems] must also be flexible so that they can respond to events that were not foreseen by the developers of the system” (2003).

Callan also highlights what an AI program is not:

An AI program is not like a database program that queries for information from a table, it is not like a spreadsheet program that computes total sales by category of item, and it is not like a program for constructing and sending emails. All of these programs follow clearly defined sequences of instructions determined by what the user requests. In these conventionally programmed systems, it is possible to predict what the user will want to do and program instructions accordingly. AI programs are somewhat different. Programs are still constructed by programmers, but the program they implement must be able to execute in a way that is highly flexible (2003).

To break the term down to an even more basic level, The Canadian Oxford Dictionary defines artificial intelligence as “the field of study that deals with the capacity of a machine, esp. a computer, to simulate or surpass intelligent human behaviour.”

While this definition provides something more concrete, there is still confusion as to what artificial intelligence is, which is why it is so hard for public relations professionals to know how the technology will affect our roles within an organization.

Additionally, it should be noted that there are several other key components to AI, as explained by Techopedia. Knowledge engineering is an important part of AI research, and as their website explains:

Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task (2019).

Machine Learning and Automation

Machine learning and automation also play an important role in AI technology, and are key concepts to understand when exploring how AI will affect the communications profession. Machine learning is an application of AI based on the idea that we can build machines that process data and learn on their own, without constant supervision and monitoring by humans. Machine learning allows a computer to examine text and determine whether it is positive or negative, or whether a song will make listeners happy or sad. As we will see later, machine learning is also being used to allow companies to offer automated customer service that is just as useful as human customer support (Mills, 2018). However, if you remember back to 2016 and Microsoft’s AI chatbot Tay, you know that machine learning can go very wrong, very quickly.
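To make the idea of machine learning on text a little more concrete, here is a minimal sketch of a supervised sentiment classifier in Python using scikit-learn. The training sentences and labels are invented for illustration; commercial monitoring tools rely on far larger datasets and more sophisticated models.

```python
# A minimal sketch of supervised text sentiment classification.
# The training sentences and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this product, it works perfectly",
    "Fantastic service and friendly staff",
    "Terrible experience, I want a refund",
    "The app keeps crashing and support never replies",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Turn each sentence into word counts, then fit a simple classifier on them.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The trained model can now label text it has never seen before.
print(model.predict(["This new release works perfectly"]))           # expected: ['positive']
print(model.predict(["Support never replies and I want a refund"]))  # expected: ['negative']
```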

Automation has one purpose: to let machines perform repetitive tasks. Automation may sound very similar to AI, but there is a big difference: automated machines are driven by manual configuration. As Dave Evans puts it in the simplest way, AI is designed to mimic humans, while automation is designed to follow orders (2017).

What is happening today?

As a communications practitioner, I am aware that AI is beginning to play a role in the profession and in the tools I use to do my job. But I wondered if my peers felt the same way. While I did not have time to perform formal research, I decided to ask my friends and colleagues the following question:

“Can you tell me how AI is already part of your job, and/or how you think AI will affect your job in the future? Any examples of how you think you are using AI now would be great.”

If people responded asking what artificial intelligence is, I let them know that I wanted them to respond based on their understanding of what AI is.

The question was sent via email to approximately 30 colleagues at the University of Calgary (where I work), and 15 people on the board of directors of the Calgary chapter of the International Association of Business Communicators (IABC) (I am also on the board of directors). I received eight responses, three from my work peers, and five from my fellow directors. As expected, the responses were quite varied.

None of my work peers indicated any current uses of AI in the work they do or the tools they use. My board peers, whose roles vary from independent consultants to government employees to corporate communicators, identified several ways in which AI is part of their current roles.

NC uses the A/B testing feature in MailChimp, an e-marketing platform. This feature can be used on the subject lines of customer emails; the program will choose the subject line that is performing best and send the campaign out with that subject line (Figure 1). MailChimp also has a scheduling feature that will review data from previous campaigns to find the optimal time to send an email to receive the highest open rate and click-through rate.

Figure 1: Setting up an A/B Test on a subject line in MailChimp.
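MailChimp does not publish the internals of its A/B testing feature, but the underlying logic NC describes can be sketched in a few lines: send two subject lines to a small test segment, compare their open rates, and send the rest of the list the winner. The subject lines and numbers below are invented.

```python
# A simplified sketch of subject-line A/B testing: try two variants on a small
# test group, then send the remaining audience whichever one performed better.
# Subject lines and counts are invented for illustration.

def open_rate(opens: int, sends: int) -> float:
    """Fraction of recipients who opened the email."""
    return opens / sends if sends else 0.0

variants = {
    "Subject A: Our spring newsletter is here": {"sends": 500, "opens": 110},
    "Subject B: 5 stories you missed this spring": {"sends": 500, "opens": 165},
}

# Pick the subject line with the highest open rate in the test segment.
winner = max(variants, key=lambda s: open_rate(variants[s]["opens"], variants[s]["sends"]))
print("Send the rest of the campaign with:", winner)  # Subject B wins here
```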

NC also uses Facebook’s AI to evaluate content that is performing well amongst the organization’s followers and can buy ads to boost a post to reach more people (Figure 2). KD, who works remotely for a non-profit organization in Europe, uses similar tools to track social media performance and trending topics.

Figure 2: Options to boost a post on a Facebook company page.

RE, who works for a digital marketing company, also uses Facebook’s AI in his work. He can upload a list of customers or prospects into the platform to create a “lookalike” audience. As RE states, “It’s a bit of a black box as to how that happens, but Facebook finds correlations among the audience I provide, and then expands it out to match other Facebook users it thinks look like them” (personal communication, March 11, 2019). Using this tool, he can create target audience groups for his clients to promote their product and services.
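As RE notes, Facebook’s lookalike matching is a black box. The general idea can still be illustrated with simple similarity matching: describe each seed customer with a few numeric features, then rank other users by how close they are to the average seed profile. The features, users and numbers below are invented, and this is not Facebook’s actual algorithm.

```python
# An illustrative sketch of "lookalike" audience expansion: score how similar
# each candidate user is to an uploaded seed audience and keep the closest
# matches. This is NOT Facebook's algorithm; all features and users are invented.
import numpy as np

# Each row describes a seed customer with simple numeric features,
# e.g. [age, purchases_per_year, minutes_on_site_per_week].
seed_audience = np.array([[34, 12, 95], [29, 10, 80], [41, 15, 110]], dtype=float)
candidates = {
    "user_a": np.array([31, 11, 88], dtype=float),
    "user_b": np.array([55, 1, 10], dtype=float),
    "user_c": np.array([38, 14, 102], dtype=float),
}

centroid = seed_audience.mean(axis=0)  # the "average" seed customer

# Rank candidates by distance to that average profile; closer means more alike.
ranked = sorted(candidates, key=lambda u: np.linalg.norm(candidates[u] - centroid))
print(ranked[:2])  # ['user_c', 'user_a'] are the two closest matches
```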

Going even deeper than RE’s experience, AI can use historical data, such as a user’s recent purchases, to create personalized communication that catches the customer’s attention. As O’Brien points out (2019), this will eliminate segmentation, as AI will be able to extract individual preferences, allowing companies to market directly based on those preferences.

Independent consultant RP sees AI in many of the tools she uses for her business, including email scheduling tools in Gmail, and reporting tools to help her determine optimum times to post social media content. She also uses a tool called IFTTT, which, according to their website, is “the free way to get all your apps and devices talking to each other.” RP uses the app to manage and synchronize multiple calendar items to ensure she meets client deadlines. KD also uses a tool, which she refers to as an artificial assistant, to organize her schedule and deadlines.

In addition to asking my peers how they use AI, I read many articles from trade publications and popular magazines about current technology trends in the public relations profession, many of which align with feedback received from my colleagues.

According to Jason O’Brien, many businesses have introduced the use of chatbots, which can help customers navigate a website, answer their questions, and even help book flights and hotel rooms (2019). NC, KD, and RP all mentioned that they are either using Facebook’s chatbot feature to respond to messages from customers, or are testing out chatbot possibilities for clients.

For internal/employee communications, Tangowork designs chatbots “that support, nudge or educate your workforce” (2019). The tool gives companies an easy way to communicate with employees over a variety of messaging platforms, such as Skype, SharePoint, Slack and Facebook, and can automate conversations when employees have frequently asked questions (Aspland, 2018).
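The matching at the heart of a FAQ-style chatbot can be sketched very simply as keyword overlap between the employee’s question and a set of canned answers. Platforms such as Tangowork use far more sophisticated natural language processing; the questions and answers below are invented.

```python
# A very simplified sketch of FAQ chatbot matching: return the canned answer
# whose stored question shares the most words with what the employee typed.
# Real chatbot platforms use much more sophisticated language understanding.

faq = {
    "how do I reset my password": "Visit the IT portal and choose 'Forgot password'.",
    "when is the benefits enrollment deadline": "Benefits enrollment closes on November 30.",
    "how do I book a meeting room": "Use the room booking tab on the intranet.",
}

def answer(question: str) -> str:
    # Normalize the question, then score each FAQ entry by shared words.
    words = set(question.lower().replace("?", "").split())
    best = max(faq, key=lambda q: len(words & set(q.lower().split())))
    return faq[best]

print(answer("What's the deadline for benefits enrollment?"))
# -> "Benefits enrollment closes on November 30."
```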

Companies like Convercent are using chatbots, predictive analysis and natural language processing to develop an interactive Code of Ethics for employees, allowing users to find answers and report issues easily, rather than combing through a 100-page PDF to find an answer. Finn is a chatbot that pops up to respond to questions or let an employee report an issue directly in the chat. On the back end, an analytics dashboard alerts company leaders to potential issues by notifying them when there is increased activity (Dishman, 2019).

Chatbot interfaces will likely become more prevalent in organizations, and a key tool for public relations professionals to learn and manage, as they have the ability to remove real and perceived barriers when a chatbot becomes the “first point of entry for employees to raise concerns, ask questions, and have a meaningful dialog” (Dishman, 2019). Experts are also predicting that customer experiences will move beyond interacting with chatbots to virtual agents, complete with faces and personalities, that will be able to handle even more customer service tasks (Curkpatrick, 2019).

In addition to boosting posts in Facebook and using tools to schedule social media posts to reach the most people and get the most clicks, I have also used tools such as the Hemingway App (Figure 3) to clean up website content and increase readability. After inserting text into the text box, the app will highlight sections that can be improved or simplified, making content more concise and easier to read. Similarly, Grammarly uses AI to help users “Compose bold, clear, mistake-free writing,” which can be used for emails or social media posts. The company claims that the tool will help users with grammar and spelling, as well as language choice, conciseness and formality.

Figure 3: The Hemingway App highlights long sentences, passive voice, and better word choices for users.
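Hemingway does not publish its exact scoring, but a classic readability formula, the Flesch Reading Ease score, gives a sense of how such tools rate text: shorter sentences and shorter words score as easier to read. Below is a rough sketch with a deliberately crude syllable counter; real editors use more careful text analysis.

```python
# A rough sketch of automated readability scoring using the well-known Flesch
# Reading Ease formula (higher scores mean easier reading). This illustrates
# the general idea only; it is not how the Hemingway App actually works.
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
print(round(flesch_reading_ease(
    "Notwithstanding considerable organizational complexity, the institution "
    "nevertheless accomplished its overarching communication objectives."), 1))
# The first, simpler passage scores much higher (easier to read) than the second.
```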

Like the Hemingway App, Textio is a writing tool, but one designed specifically for job ad writers. Users start typing content for the ad, and the tool scores its quality based on an analysis of a global job ad database. The tool then helps users ensure they have covered all the necessary information in the ad, as well as its tone, structure, grammar, spelling and punctuation. According to Aspland (2018), the tool “not only provides an overall assessment, but it also looks at each word and the impact it has. It then provides advice on how to improve your content and re-rates the ad in real time as you make adjustments.”

AI-generated content is already being used at major newspapers in the United States. As Subhamoy Das points out, The Washington Post uses in-house storytelling technology, called Heliograf, which writes news stories and social media posts (2019). The Associated Press creates almost 4,000 quarterly earnings stories with AI writers, and the Los Angeles Times uses QuakeBot, which creates quick and simple posts and tweets about earthquakes, pulling real-time data from the U.S. Geological Survey.
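Early automated earnings stories were largely template-driven: structured financial data is slotted into pre-written sentence patterns. The toy sketch below shows the basic idea; the company, figures and wording are invented, and this is not the AP’s actual system.

```python
# A toy sketch of template-driven story generation, the basic idea behind many
# automated earnings reports. The company, figures and wording are invented.

def earnings_story(company: str, quarter: str, revenue_m: float,
                   prior_revenue_m: float, eps: float) -> str:
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue_m:.1f} million, "
        f"which {direction} {abs(change):.1f} percent from a year earlier. "
        f"Earnings came in at ${eps:.2f} per share."
    )

print(earnings_story("Acme Corp", "third-quarter", 412.5, 389.0, 1.27))
```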

Das also highlights the tool Quill, a program that humanizes data with technology that “interprets your data and transforms it into ‘intelligent narratives’ at speed and scale.” He names Forbes, Credit Suisse and Groupon as users of the tool.

Recently, I was working on a communication project for a French-language program offering. While we had a professional translate our media release, website content, and email blast, there were moments when I needed to quickly write a social media post or the headline for the email blast. Rather than pay for additional translation, I turned to Google Translate for help. Google’s systems use “statistical machine translation,” which creates translations based on patterns found in large amounts of text. According to an informational video created by Google, the computer can learn language the same way humans can, by understanding grammatical rules and vocabulary and how to construct sentences (2010). Google’s computers are then left to learn all the different rules and exceptions across languages from texts, books, and other documents from around the world that have already been translated. Google recognizes that not all translations will be perfect, but as users continue to add translated text to the database, the system becomes smarter and more accurate.

Public relations professionals often plan, execute and attend many events as part of their jobs, and with events comes a ton of photographs of happy people networking. It is often difficult to determine who is in the photographs, but Kristy D. uses Google Drive to help with this task, as the company uses machine learning to categorize images. Through the launch of its Photos app, Google included consumer-facing AI features that have transformed thousands of untagged photos in a user’s library into searchable databases (Byford, 2019).

Google’s technology was built on a deep neural network trained on data that had been labeled by humans, an approach called supervised learning. The network is trained on millions of images so it can look for visual clues at the pixel level that help identify the category. According to Byford, “Over time, the algorithm gets better at recognizing, say, a panda, because it contains the patterns used to correctly identify pandas in the past…With further training, it becomes possible to search for more abstract terms such as ‘animal’ or ‘breakfast,’ which may not have common visual indicators but are still immediately obvious to humans.”
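Google’s production systems use deep neural networks trained on millions of labeled photos, but the supervised-learning pattern Byford describes (fit a model on labeled examples, then predict labels for images it has never seen) can be shown in miniature. The sketch below uses scikit-learn’s small built-in digits dataset purely as a stand-in for a labeled photo collection.

```python
# Supervised learning in miniature: train a classifier on labeled images, then
# use it to label images it has never seen. This is only a small-scale stand-in
# for the deep networks and huge photo collections described in the article.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network learns which pixel patterns correspond to each label.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
print("predicted label for one new image:", model.predict(X_test[:1])[0])
```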

Not only can Google identify people, pets and inanimate objects in your photos, it can also automatically caption your YouTube videos (Figure 4), a tool I have used a few times in my work. The program operates on machine learning algorithms — Google’s automatic speech recognition (ASR) — that can transcribe speech in ten languages (Google Blog, 2019).

What does the future hold for the profession?

You will recall that part of the question I asked my peers was how they think AI will affect the profession, and the responses varied depending on the organization they worked for or whether they were an independent consultant.

Responses from my colleagues at the university mainly focused on the use of chatbots to answer questions from prospective students. As we saw with the responses from my peers in other organizations, chatbots are already used to enhance customer service, so there is clearly some catching up to do in the post-secondary environment. This is not to be mistaken for a lack of research being conducted in the field at universities. In fact, two recent announcements of research centres at the University of Toronto and at the Université de Montréal and McGill University show strong support for AI research in Canada.

Supporting AI Research in Canada

The Montréal Institute for Learning Algorithms (MILA) is a partnership between the Université de Montréal and McGill University that will bring together the different aspects of Montréal’s AI “ecosystem,” according to Valérie Pisano, the president and CEO of MILA (Serebrin, 2019).

At the announcement for MILA, Québec Economy Minister Pierre Fitzgibbon stated that “We need AI, we need machine learning, we need the development of new technology to get people more efficient, I think that over the long run it’s going to be beneficial, furthermore, I think we’re going to see that most of the new jobs created through AI will be high-quality jobs, high-paying jobs” (Serebrin, 2019).

At the University of Toronto, the new Schwartz Reisman Innovation Centre will help further research into the intersection of technology and society, specifically how AI, biomedicine and other disruptive technologies will improve lives. According to a media release, the centre will “facilitate cross-disciplinary research and collaboration and will draw on the U of T’s signature strengths in the sciences, humanities and social sciences to explore the benefit and challenges that AI, biotechnology and other technological advances present for our economy, our society and our day-to-day lives” (2019).

The Government of Canada demonstrated its commitment to advancing AI research in Canada with the CIFAR Pan-Canadian Artificial Intelligence Strategy, a $125 million project that brings together the Alberta Machine Intelligence Institute (AMII) in Edmonton, MILA, and the Vector Institute in Toronto. The strategy has four primary goals:

  • To increase the number of outstanding artificial intelligence researchers and skilled graduates in Canada.
  • To establish interconnected nodes of scientific excellence in Canada’s three major centres for artificial intelligence in Edmonton, Montréal and Toronto.
  • To develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.
  • To support a national research community on artificial intelligence (2018).

Privacy

Beyond implementing chatbots to improve customer service for prospective students, my university colleagues saw a few other general areas where AI might have an impact on the profession. SF and her team have a broad idea of who their target audience is, and she believes AI will provide more specific data about that audience, allowing them to target their messaging more narrowly. SM agrees with SF and thinks AI will help her team identify likely individuals to target in the recruitment process.

SF is concerned about how AI will affect privacy and information sharing with vendors, acknowledges that it will be important to really understand the rules within the Freedom of Information and Protection of Privacy (FOIP) Act, and sees an opportunity for the university to influence the rules and potential future changes.

Fear of how our data is being used by these new technologies is not new. Studies have shown that it is not that people are afraid of AI, robots, or automation, but that their level of comfort depends on the type of interaction they have with the technology. In a study conducted by researchers Markus Appel of the University of Wuerzburg and Timo Gnambs of the Leibniz Institute for Educational Trajectories, when interviewees were interacting with robots that were cleaning homes and offices, or rescuing humans from dangerous situations, feelings towards the robots were usually positive. But when people were asked how they felt about robotic nurses and self-driving cars, attitudes became more negative and fearful. According to the researchers, interviewees’ criticism of the technology increased as the situations grew more intimate, regardless of the robot’s appearance (Diaz, 2019). If we are going to incorporate AI into our work at the university, we need to keep the relationships between clients and the technology at a basic level if we want to maintain the current level of trust among parties, especially when personal information is involved.

Audience Research & Analysis

At non-educational organizations and among independent consultants, there is agreement that AI will be able to help public relations practitioners with audience research. NQ, who works for a municipal government, wonders whether AI will need to be considered an audience in and of itself, or whether it will simply be another channel to add to our toolboxes. She asks, “If people start using AI a lot more, will it be considered an extension of them? Already people have two selves, their in-person self and their virtual self, usually behaving in two different ways. Additionally, everyone is now their own mode of broadcast. I think communicators are going to be challenged in identifying who they are communicating to, and the best way to get messages to them” (personal correspondence, 2019, March 17).

Both NQ and KD agree that AI will be an important tool for audience analysis, metrics, data scanning and reporting, allowing communicators to spend more time on strategy, creative writing of messages, and storytelling. KD adds that with large amounts of customer or employee data, communicators could analyze trends in employee engagement over time and make predictions about the best times of year to undertake change projects, or provide real-time insights into customer behaviour that would not be possible via traditional methods of research (personal correspondence, 2019, March 18).

These views about the positive impacts of AI for professional communicators are echoed in popular media. AI will give us the ability to analyze the digital landscape more extensively and to report more accurate insights, real-time updates, and evaluations of new and emerging trends. AI will allow us to deliver news and information to our target audiences in innovative ways through virtual reality applications. And AI will allow communicators to provide better metrics for our organizations, and to flag inconsistencies, discrepancies and conflicts more quickly and accurately. Forbes’ own Communications Council believes that AI will make us better humans overall, stating “AI’s real value is in enhancing, supporting and amplifying human truth, human experience and, ultimately, human freedom” (Petrucci, 2018). The author goes on to say, “I believe that organizations will increasingly become the purveyors of these things in the future. Ironically, AI can help enhance what it means to be human.”

Virtual Reality and Voice

Two other tools thought to be important moving forward are virtual reality and voice. Virtual reality will be useful for communicators in nonprofit organizations and charities to be able to send powerful messages to their audience. Virtual reality can create strong emotional connections with an audience that can change attitudes, drive action, and influence policies. Ultimately, it can be a powerful awareness, fundraising or advocacy tool (O’Brien, 2019).

Voice search and its impact on search engine optimization (SEO) is an area companies need to start looking at immediately, as more and more consumers use tools such as Siri, Cortana and the Amazon Echo to perform searches. Google reported that in 2017 voice searches made up 20 percent of all searches, and numerous sources cite a ComScore report that says 50 percent of all searches will be voice searches by 2020, which means that communicators and digital marketers need to start prioritizing a “voice-first” approach to marketing today (Das, 2019). In fact, I was at the World Conference of the International Association of Business Communicators in Montréal in June 2018, and the room for a presentation on the newest trend in communications (voice search) was standing room only. We know it is coming; we just need to start doing the work to make the change.

Automated Journalists

What about the technology that can write your report or news release for you? Many media outlets are claiming that AI can assist journalists in high-value work, freeing up time to dive deeper into a story. As Nicole Martin wrote in Forbes, “The AP estimates that AI helps to free up about 20 percent of reporters’ time spent covering financial earnings for companies and can improve accuracy. This gives reporters more time to concentrate on the content and story-telling behind an article rather than the fact-checking and research. All in all, this could truly benefit journalism” (2019).

Organizations getting on board

We also need to consider the rate at which organizations are adopting new AI technologies, and how quickly we need to learn new systems and tools to adapt to the changing landscape of how we do business. A recent Forbes article that looked at surveys about the state of AI adoption found that:

  • 92 percent of organizations are increasing how quickly they are investing in AI and big data, and 62 percent of those organizations have already seen measurable results;
  • 36 percent of organizations say that AI and machine learning have played a significant role in their digital strategy, and 45 percent see AI and machine learning as the most important technologies to play a significant role in their digital strategy three years from now; and,
  • 71 percent of retailers say AI is creating jobs, with more than two thirds of those jobs being at a senior level, and 75 percent of retailers report that AI has not replaced any jobs in their organization so far (Pross, 2019).

Based on these results, it is imperative that professional communicators get on board quickly, because our organizations are implementing new technologies in their digital strategies now, and we need to be prepared to help lead the charge.

Ethical and Legal Issues

If professional communicators are not going to lose their jobs soon, and we begin to incorporate more technology tools into our work, it is imperative that we understand the ethical and legal issues that come along with the technology. Do we need to tell our clients, customers or employees that algorithms and AI systems are being used in our organizations? How transparent do we need to be about the types of data we are collecting, and what we are using that data for? Do we know who is building the programs and systems, and what biases may have inadvertently been coded into them? These are difficult questions, but ones we need to be able to answer.

I reference Google several times throughout this paper, as it is one of the biggest drivers of AI, machine learning and algorithmic technology in the world. In April 2018, more than 3,000 employees protested the company’s bid on Project Maven, a Pentagon project that would leverage AI to analyze videos taken by drones. As a result of the pushback from employees, Google said it would stop its efforts to win the contract, and created a set of AI principles, which specifically state that the company would not create products that “create or reinforce bias, make technologies that cause harm, or build systems that use information for surveillance purposes” (Pangburn, 2019). As a communications professional, knowing that Google recognizes some of the ethical issues around AI and has created principles for itself gives me some comfort that the Google tools I choose to use will abide by those principles.

Some cities in the United States are also implementing transparency rules for algorithms that are used in the public sector. In 2017, the New York City Council adopted an algorithmic accountability law which created a task force to provide recommendations on how city agencies should use and share information related to automated decision systems, and how agencies will address instances where people are harmed (Pangburn, 2019). If state or provincial and federal laws are not created regarding algorithms and data use, organizations may need to create their own rules for governance, much like Google has. However, as Amanda Levendowski, a clinical teaching fellow with the Technology Law & Policy Clinic at New York University, points out, “It’s impossible for the public to engage in discourse about whether an AI system is fair, accountable, transparent, or ethical if we don’t know that an AI system is being used to watch us — or if we don’t know the technology exists at all” (Pangburn, 2019). Before organizations can create governance and principles, they must inform their customers, clients and employees that the technology is being used.

Fake news and advertising algorithms

We have all heard the endless stories of how Russia planted fake news stories during the last American presidential election, or how people see ads for screwdrivers in their Facebook feeds after searching for the best screwdriver to buy. These are examples of algorithms at play, and public relations professionals must recognize how they are being used and how they can affect consumers. Algorithmic amplification occurs when certain pieces of online content become popular at the expense of other viewpoints, and our clicks, likes, comments and shares are what power the machine. These algorithms were created by companies such as Facebook, YouTube, Netflix and Amazon to help consumers make decisions. The choices users make help train the algorithm to recommend certain products to similar users. In a recent article written for The Conversation Canada, Swathi Meenakshi Sadagopan of the University of Toronto asks:

Did you click on something because you were inherently interested in it, or did you click on it because you were recommended it? … The vast majority of algorithms do not understand the distinction, which results in similar recommendations inadvertently reinforcing the popularity of already-popular content. Gradually, this separates users into filter bubbles or ideological echo chambers where differing viewpoints are discarded (2019).
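To make that feedback loop concrete, here is a deliberately extreme toy simulation: a feed that always promotes the most-clicked item gives that item all of the future exposure, so a tiny early lead locks in. The item names and click counts are invented.

```python
# A toy simulation of algorithmic amplification: a feed that always promotes
# the most-clicked item gives it all the exposure, so a small early lead grows
# into total dominance. Item names and starting counts are invented.

clicks = {"article_a": 12, "article_b": 11, "article_c": 11}  # nearly equal start

for _ in range(100):
    # The feed surfaces whichever item has the most clicks so far...
    promoted = max(clicks, key=clicks.get)
    # ...and the promoted item is the one readers see, and therefore click.
    clicks[promoted] += 1

print(clicks)  # {'article_a': 112, 'article_b': 11, 'article_c': 11}
```

Real recommendation systems are probabilistic and far more nuanced, but the rich-get-richer dynamic is the same one Sadagopan describes.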

In addition to everything else we must learn about how algorithms, machine learning and AI work, we must also figure out how to ensure that we are not creating biases around what our clients, customers and audiences want to see.

Like the fake news phenomenon, I recently learned about “deepfakes,” where developers are using deep learning technology to identify the facial movements of a person and then creating extremely realistic, computer-generated fake videos, complete with realistic lip movement, facial expressions, and background and audio. With advancements in technology happening rapidly, it has become increasingly difficult to tell the difference between a real video and a deepfake. According to the 2019 Edelman AI Survey (Figure 5), respondents from the general population and tech executives believe deepfakes will further erode public trust, and one-third believe these videos could lead to war. As the authors point out, “Determining authentic content will be increasingly critical.”

Figure 5: Percentage of respondents in the 2019 Edelman AI Survey who are concerned about deepfakes.

Crisis Communications

What about crisis communications? There is a lot of talk in the trade publications that AI will help take the emotion out of crisis communications and will allow organizations to make decisions more quickly and be better prepared to respond more efficiently. As described by Krista Davidson in Communication World Magazine, “Communicators will be able to advise their leaders on key crises and issues even before they arise, using data to assess potential damage and the best path forward” (2019). I believe crisis communications is a sticky area for the use of AI. Sometimes emotion can play an important role in the decisions that are made, especially when it comes to injuries or death. I would recommend organizations use caution when considering AI and other technologies in their crisis communications plans.

Algorithmic Bill of Rights

A Fast Company article written in March 2019 argues for an algorithmic bill of rights to protect society, including a set of responsibilities for users of decision-making algorithms. In the article, author Kartik Hosanagar outlines four main pillars for the bill of rights:

  • “First, those who use algorithms or are impacted by decisions made by algorithms should have a right to a description of the data used to train them and details as to how that data was collected.
  • Second, those who use algorithms or who are impacted by decisions made by algorithms should have a right to an explanation regarding the procedures used by the algorithms, expressed in terms simple enough for the average person to easily access and interpret. These first two pillars are both related to the general principle of transparency.
  • Third, those who use algorithms or who are impacted by decisions made by algorithms should have some level of control over the way those algorithms work — that is, there should always be a feedback loop between the user and the algorithm.
  • Fourth, those who use algorithms or who are impacted by decisions made by algorithms should have the responsibility to be aware of the unanticipated consequences of automated decision making” (2019).

The Montréal Declaration for a Responsible Development of Artificial Intelligence, announced on November 3, 2017, aims to encourage a “progressive and inclusive orientation to the development of AI.” The Declaration lays out a series of ethical guidelines for the development of AI, including seven key values of wellness, autonomy, justice, privacy, knowledge, democracy and accountability. The ultimate goal of the Declaration is to offer recommendations to improve the common good for everyone in society who will be affected by AI (2017).

I hope that any regulation created for the implementation of AI technologies in organizations will help to eliminate biases that are built into the technology and influence the data it produces. Biased data can have harmful implications for people, from being denied bank loans based on race or gender to being excluded from hiring opportunities. Communications professionals will be a valuable source of oversight and guidance to help organizations avoid creating biased data.

Respondents in the Edelman AI Survey also believe that AI regulation is critical, and most respondents feel that this regulation should be managed by a public oversight body, not by the organizations who are developing and deploying the technology (Figure 6).

It is clear that some type of regulation or principles needs to be established relatively quickly, as the pace of adoption of AI and machine learning is rapidly increasing, and organizations must be able to maintain the trust of their employees and customers. Organizations can lead the charge, but ultimate oversight and regulation should come from government and other public bodies.

Am I going to lose my job?

The short answer for public relations professionals is: not any time soon. For many workers in manufacturing, construction, maintenance, transportation, agriculture, and food preparation, AI and automation will affect jobs at a much quicker rate. As we get deeper into the “AI era,” automation will have a disproportionate impact on certain people and places (Misra, 2019). For those in public relations, communications and marketing, as we can see throughout this paper, the adoption of AI and new technologies by organizations will likely create more work and job opportunities for us.

While AI will replace jobs in the areas previously mentioned, jobs in areas such as data analysis, software development and engineering, and data integrity are likely to increase. New professions will also be created, such as virtual-world designers, which will demand a high level of creativity and flexibility, and further education will be needed for those put out of work by AI and automation (Harari, 2017).

KD points out that the role of public relations professionals has a very human aspect to it, and that it will be tougher for AI to add value to the qualitative side of communications and branding than to marketing, which already has more quantitative measures in place and is being influenced by AI tools faster than public relations or internal communications. She does not think our roles will become redundant, as there is not currently an affordable AI solution that can respond to a crisis or manage a change project (personal communication, 2019, March 18).

RP recognizes that AI can be used negatively, which can be seen in spam bots, comment bots, transaction recordings, and so on. But she also believes that AI can help weed out the weak marketers and communications professionals. “Right now, I think many communicators can just write/post/create something without much thought or strategy. AI is going to reward the people who know how to use it best to enhance their customer or client experience, and, essentially, punish those who try to take advantage of it” (personal communication, 2019, March 11).

So how do professional communicators stay relevant? We know that AI technologies will not be putting us on the unemployment line in the near future. But we cannot stagnate. We will need to continue to learn about emerging technologies, so that we can educate our organizations on how these technologies work and how they can benefit our employees, customers and, ultimately, our bottom line. In addition to the multiple hats we already wear, we will need to become data analysts, bias breakers, builders of virtual worlds, and technology evangelists. We will need to start developing compelling content across multiple media that provides an emotional connection and resonates with our audiences in order to build and maintain trust. As Concordia University’s Nadia Naffi explains, we will need to become “Human+ workers,” individuals who work alongside machines to reach collaborative intelligence (2019). And the next generation of employees will need to be trained as human+ workers before they enter the job market, while the existing workforce will need to continuously upgrade and learn new skills.

What will happen if we do not do any of this? Harari puts it best:

“The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge — the useless class. People who are not just unemployed, but unemployable” (2017).

References

2019 Edelman AI Survey (2019, March). Edelman. Retrieved from: https://www.edelman.com/sites/g/files/aatuss191/files/2019-03/2019_Edelman_AI_Survey_Whitepaper.pdf.

Artificial Intelligence. (1998). In K. Barber (Ed.), The Canadian Oxford Dictionary (p. 71). Toronto: Oxford University Press.

Artificial Intelligence. (2018). CIFAR Pan-Canadian Artificial Intelligence Strategy. Retrieved from https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy

Artificial Intelligence (AI). (2019). Techopedia. Retrieved from https://www.techopedia.com/definition/190/artificial-intelligence-ai.

Aspland, W. (2018, January 2). Welcome to the Machine Age: How #robocomms are changing communication. Communication World Magazine. Retrieved from https://cw.iabc.com/2018/01/02/welcome-to-the-machine-age-how-robocomms-are-changing-the-communication-game/.

Avent, R. (2016). The Wealth of Humans: Work, Power, and Status in the Twenty-first Century. New York: St. Martin’s Press.

Byford, S. (2019, January 31). How AI is changing photography. The Verge. Retrieved from https://www.theverge.com/2019/1/31/18203363/ai-artificial-intelligence-photography-google-photos-apple-huawei.

Callan, R. (2003). Artificial Intelligence. Hampshire: Palgrave Macmillan.

Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: BasicBooks.

Curkpatrick, J. (n.d.). AI and the 2019 trends. Cropley Communication. Retrieved from https://cropleycomms.com.au/ai_2019_trends.html.

Das, S. (2018, January 2). 5 Tech Trends that Will Rule Digital Platforms in 2018. Communication World Magazine. Retrieved from https://cw.iabc.com/2018/01/02/5-tech-trends-that-will-rule-digital-platforms-in-2018/.

Davidson, K. (2019, April 2). How AI Will Build Smarter Communication. Communication World Magazine. Retrieved from https://cw.iabc.com/2019/04/02/ai-stronger-comms/.

Diaz, J. (2019, January 28). A sign of the times: People are becoming more suspicious of robots. Fast Company. Retrieved from https://www.fastcompany.com/90297133/a-sign-of-the-times-people-are-becoming-more-suspicious-of-robots.

Dishman, L. (2019, February 4). Can chatbots make workers more ethical? Fast Company. Retrieved from https://www.fastcompany.com/90296695/can-chatbots-make-workers-more-ethical.

Evans, D. (2017, September 26). So, What’s the Real Difference Between AI and Automation? Medium. Retrieved from https://medium.com/@daveevansap/so-whats-the-real-difference-between-ai-and-automation-3c8bbf6b8f4b.

Galloway, C., & Swiatek, L. (2018). Public relations and artificial intelligence: It’s not (just) about robots. Public Relations Review 44, 734–740.

Google (Producer). (2010). Inside Google Translate. Available from https://www.youtube.com/watch?v=_GdSC1Z1Kzs.

Grammarly. (2019, March 23). Retrieved from https://www.grammarly.com/.

Gratton, L. (2017). Work and the rise of the machines. In D. Franklin (Editor), Megatech: Technology in 2050. New York: Profile Books.

Harari, Y.N. (2017, May 8). The meaning of life in a world without work. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book

Hosanagar, K. (2019, March 13). We need an algorithmic bill of rights before algorithms do us wrong. Fast Company. Retrieved from https://www.fastcompany.com/90317658/we-need-an-algorithmic-bill-of-rights

[Image of A/B Testing on a subject line]. (2019, March 23). MailChimp. Retrieved from https://mailchimp.com/.

[Image of captions in a YouTube video]. (2019, March 29). Google Blog. Retrieved from https://youtube.googleblog.com/2017/02/one-billion-captioned-videos.html.

[Image of the Hemingway App]. (2019, March 23). Hemingway Editor. Retrieved from http://www.hemingwayapp.com/.

[Image of post boosting]. (2019, March 23). Facebook. Retrieved from https://www.facebook.com/.

IFTTT. (2019, March 23). Retrieved from https://ifttt.com/.

Kent, M.L., & Saffer, A.J. (2014). A Delphi study of the future of new technology research in public relations. Public Relations Review 40, 568–576.

Martin, N. (2019, February 8). Did A Robot Write This? How AI Is Impacting Journalism. Forbes. Retrieved from https://www.forbes.com/sites/nicolemartin1/2019/02/08/did-a-robot-write-this-how-ai-is-impacting-journalism/#75b597aa7795

Mills, T. (2018, Oct 12). AI In Business: Separating the Myths from the Facts. Forbes. Retrieved from https://www.forbes.com/sites/forbestechcouncil/2018/10/12/ai-in-business-separating-the-myths-from-the-facts/#7425a95b3f26.

Misra, T. (2019, January 24). Where Automation Will Displace the Most Workers. CityLab. Retrieved from https://www.citylab.com/equity/2019/01/automation-employment-technology-future-of-work-ai/581029/

Montréal Declaration for a Responsible Development of AI. (2017). Université de Montréal. Retrieved from https://www.montrealdeclaration-responsibleai.com/.

Naffi, N. (2019, March 24). In an AI era, lessons from dinosaurs help us adapt to the future of work. The Conversation. Retrieved from https://theconversation.com/in-an-ai-era-lessons-from-dinosaurs-help-us-adapt-to-the-future-of-work-113444.

O’Brien, J. (2019, March 19) 5 technology trends that are changing business communication. Communication World Magazine. Retrieved from https://cw.iabc.com/2019/03/19/5-technology-trends-that-are-changing-business-communication/.

Pangburn, DJ. (2019, January 28). How to lift the veil of hidden algorithms. Fast Company. Retrieved from https://www.fastcompany.com/90292210/transparency-government-software-algorithms.

Petrucci, A. (2018, April 20). How Artificial Intelligence Will Impact Corporate Communications. Forbes. Retrieved from https://www.forbes.com/sites/forbescommunicationscouncil/2018/04/20/how-artificial-intelligence-will-impact-corporate-communications/#5eb617061dc6.

Pross, G. (2019, January 25). 7 Takeaways From Recent Surveys About The State-Of-Artificial Intelligence (AI). Forbes. Retrieved from https://www.forbes.com/sites/gilpress/2019/01/25/7-takeaways-from-recent-surveys-about-the-state-of-artificial-intelligence-ai/#45ae470040ab

Robson, C. (2018, May 4). 13 ways you’re using AI in your daily life. The Keyword. Retrieved from https://www.blog.google/technology/ai/13-ways-youre-using-ai-your-daily-life/.

Sadagopan, S.M. (2019, February 4). Feedback loops and echo chambers: How algorithms amplify viewpoints. The Conversation. Retrieved from https://theconversation.com/feedback-loops-and-echo-chambers-how-algorithms-amplify-viewpoints-107935.

Sakata, T. (2018, April 24). The Good, The Bad and The Ugly of Artificial Intelligence and Machine Learning. Medium. Retrieved from https://medium.com/applied-innovation-exchange/the-good-the-bad-and-the-ugly-of-artificial-intelligence-and-machine-learning-3f7e663c317a.

Serebrin, J. (2019, January 28). 90,000-square-foot MILA AI institute opens in Mile-Ex. Montreal Gazette. Retrieved from https://montrealgazette.com/business/90000-square-foot-mila-ai-institute-opens-in-mile-ex.

Tangowork. (2019, March 29). Retrieved from https://tangowork.com/.

University of Toronto. (2019, March 25). Landmark $100-million gift to the University of Toronto from Gerald Schwartz and Heather Reisman will power Canadian innovation and help researchers explore the intersection of technology and society. Retrieved from https://www.utoronto.ca/news/landmark-100-million-gift-university-toronto-gerald-schwartz-and-heather-reisman-will-power

Winblad, A. (2017). Tech generations: the past as prologue. In D. Franklin (Editor), Megatech: Technology in 2050. New York: Profile Books.

Winston, P.H. (1992). Artificial Intelligence (3rd ed.). Massachusetts: Addison-Wesley.

Wooldridge, A. (2017). Megatech versus mega-inequality. In D. Franklin (Editor), Megatech: Technology in 2050. New York: Profile Books.

World Economic Forum. (2019). Centre for the Fourth Industrial Revolution Network for Global Technology Governance [Brochure]. Geneva.


Ali Abel

Co-owner, EH1 Design Company. Writing, research, editing and communications strategy. Certified Communications Professional.