Artificial intelligence is changing the way we work, and 94% of business leaders consider it critical to success over the next five years. The market is accelerating rapidly, and surveys link AI adoption to revenue gains: Organizations whose employees derive value from AI are six times more likely to see financial benefits.
Still, the enthusiasm AI generates should not obscure the fears and concerns that remain and that prevent many companies from scaling AI initiatives. Many of these concerns are people- and change-oriented rather than technical: 50% of people report a lack of executive commitment, 44% lack the training needed to support adoption, and 44% believe that AI solutions are too complex for end users.
Yet too many organizations continue to focus on purely technical questions and forget about the human side of AI. Technology by itself doesn’t change society: It’s the meaning people give it, and their adoption of it, that bring impact. Studies show that companies seeing the highest bottom-line impact from AI exhibit overall organizational strength and top people-management practices. And while AI changes quickly, human psychology remains remarkably stable. It is therefore possible to anticipate people’s concerns about AI, using the theory of psychological preoccupation proposed by Gene Hall (the Concerns-Based Adoption Model) and more recently strengthened by Céline Bareil.
Seven phases of preoccupation
Concerns, or preoccupations, are cognitive and emotional constructs that express a lived emotional reality and an unfulfilled state of mind. In any change, people experience different preoccupations, which unfold over time in a fixed, predictable hierarchy.
In a way, addressing the first level of concerns makes it possible to bring employees to the second level and to lead them gradually toward accepting change. Trying to address more advanced levels of concern without having resolved the previous ones will be counterproductive.
Organized into seven phases of progressive concerns, this theory, which remains relatively unknown, presents several advantages: (1) It’s proactive, focusing on preventing concerns rather than managing entrenched resistance; (2) It puts people at the heart of change; (3) The diagnosis of concerns is quick and easy; (4) Each phase calls for specific actions; (5) It maps neatly onto AI-related concerns.
Faced with AI, people will go through seven phases, each a source of concern with distinct needs:
- The absence of concerns. People will not feel concerned and won’t be aware of AI’s impact on their job: “I doubt that my work will be impacted by AI”, “It doesn’t concern me”, “AI hasn’t made great progress; there is no need to talk about it”. Overcoming this phase requires business leaders to shake employees out of complacency, convince them of the importance of the change, and provide them with quantifiable and verifiable facts. Your goal must be to demonstrate that AI is increasingly impacting every industry and that everyone must jump in or risk being quickly left behind.
- Egocentric concerns, which are centered on people themselves: “Will AI take my job?”, “Am I going to lose part of my autonomy?“, “How will AI impact my work and my daily habits?” To address such emotion-laden concerns, the priority is to listen, support, and reassure people, explaining AI’s real impact on their job. Bringing clarity about human and machine collaboration, and how each one will learn from the other, will foster acceptance and help progression and change.
- Concerns focused on the organization and its commitment: “Is AI a tool that will end up on the shelf and be underused?”, “Will the executive management be committed to AI implementation?”, “What will it change for the whole company?” In a word, people will wonder about management. Succeeding in this phase requires demonstrating the commitment of the company’s leaders, by defining a clear strategy with which senior management is fully aligned, linking AI initiatives to core business values, or tracking a comprehensive set of key performance indicators. Companies following these recommendations are 2.3 times more likely to be identified as effective.
- Concerns about the change, or AI itself: “What’s behind it?”, “What about data privacy?“, “What scientific basis does this tool rest on?” Here, it’s useful to reassure people about the relevance and legitimacy of the AI being implemented. Explainability has been linked to AI trust and acceptance in numerous studies: Information provided to users reduces uncertainty and increases transparency. For example, in hiring, candidates perceive AI as fairer when they are made aware of its role in reducing human bias in the selection process. Concerns related to the process of moving AI from pilot to production might also emerge: “How are we progressing?” “Who does what?” You should communicate a detailed roadmap and define the roles of each party involved, from AI providers to employees.
- Concerns about experimentation and usage: From now on, people are open to learning and making efforts to integrate AI into their daily activities. They’ll wonder about their ability to use AI effectively (“Am I capable?”) but also about its use: “How can I perform this?”, “Where can I get this information I am looking for?“, “How can I better understand the recommendations made by AI?” People are entering a learning stage, and management must look for training opportunities or other types of support aimed at increasing users’ skills. Unfortunately, recent surveys show that only 21% of business leaders believe their company educates workers to apply and use AI effectively.
- Concerns about collaboration with other users: “How does this team use AI? Maybe they can give us some tips?”, “Whom should we talk to?“, “We should probably talk to this team about AI; they might be interested”. When these conversations occur, AI is well-accepted and used. For people who do not seek to share with others and prefer to focus on their own use, the journey may end at the previous phase. For those who want to go further, it will be useful to organize opportunities for collaboration with other users (and non-users) within the company, or with external stakeholders.
- Concerns about continuous improvement: “How can we go even further?“, “Are there workarounds to use it for other needs?”, “It would be great to have this additional feature!” Questions become more exciting and innovation-oriented, focusing on doing better with the tool: AI is, at this point, fully accepted.
One question for a big impact
While more advanced diagnoses are possible, a single question is enough to identify what worries people the most: “What concerns you most right now regarding the implementation of AI?” The question may seem too simple to be effective, yet few people explicitly ask it. It allows you to carry out a relatively quick diagnosis of a person’s emotional state and to form a precise idea of the phase they are currently in. With this knowledge, you’ll be able to tailor specific actions at the right time to promote AI acceptance.
Even if executives and senior management are first in line in advancing AI-related change, let’s not forget that everyone involved needs to take individual, adaptive initiatives to move forward and raise AI awareness. Everyone should therefore take stock of their biases, develop intellectual humility, and rethink their convictions. Getting answers to specific concerns is thus not only the responsibility of corporate management: it is our collective responsibility to put our egos aside, remain open, look for objective answers, and resist being swayed by our own emotional reactions.