According to LinkedIn, AI literacy is the new must-have leadership skill. Apparently, there is a surge of those in leadership roles adding prompt engineering and the use of Generative AI tools to their profiles.
As ever, LinkedIn is less forthcoming on what this means, other than that it is A GOOD THING!
Whether this represents the latest claims by the career-hungry and ambitious, based on having once asked ChatGPT a question, or something more meaningful, we don't know. But given the hype and panic about AI, I can imagine many in leadership roles fearing for their future careers and being desperate to appear at one with the zeitgeist. To be a Gen AI Luddite is unacceptable in a world seduced by the potential productivity power of AI.
Now, I should say I am a fan of Generative AI and its potential to reframe work. Not being hungry for advancement and promotion, I have not updated my profile. I am too busy building AI tools to assist learning and helping others to demystify what seems like a scary/exciting/confusing (delete as appropriate) change to their working environment.
I am cautious of these claims. I do think that leaders need to keep abreast of Generative AI developments. They should be working with their teams to identify use cases, experiment, and build any valuable outcomes from those experiments into their own working practice and that of their teams.
Note that I use the phrase "working with their teams". Many technology enhancements to the workplace have foundered because:
a) No one knows what to do with it or what it can do, but they know they should use it for fear of being left behind.
b) How and when to use it is imposed on those expected to deploy the tools, without appropriate involvement and consultation.
Often this is because we choose to introduce the tools with a clear link to a task or activity which needs undertaking. Spending time building an awareness of the tools' potential and how they could be used is considered an unaffordable luxury. Instead, we provide a step-by-step guide and anticipate that the oft-vaunted productivity gains will simply follow.
Unfortunately, introducing new technology and ways of working to a group without a clear understanding of potential, direction of travel, and consequences creates a cadre of saboteurs. They've read the horror stories about whole swathes of workers being surplus to requirements in this brave new world, and – not unreasonably – they are protective of their role and future status as employees.
LinkedIn provides evidence that there is a much lower level of engagement with Generative AI among those without leadership positions. They note that leaders are 20% more likely to add AI skills to their profile than the rest of the workforce. Against this backdrop, I worry that AI imposition is more likely than a considered adoption of these tools. In L&D, we have a role to play in bridging this gap by providing a more general introduction to the potential of responsibly used AI amongst those for whom it is currently 'something those tech-bods do'.
Furthermore, the C-suite is potentially to blame. While senior teams apparently rank AI skills as a top-three skill set for executives, in 4 out of 10 cases they also believe that their leadership team (presumably the one without those skills) acts as a barrier, slowing the adoption of AI in their organisations. This may explain the earlier claims made by those polishing their CVs. The potential for advancement seems more of an incentive than any genuine enthusiasm for, and growing skill with, Generative AI.
What the inhabitants of the corner offices in the world's corporate HQs want to do with AI is not explained in LinkedIn's announcements. Given that many of them have someone to open their emails for them, I suspect their engagement with the nitty-gritty of any technological breakthrough is patchy at best. How many of them have imbibed the claims by the world's consultancies and think-tanks about cost savings, reduced staffing and higher productivity per individual? How many of them have considered AI through the lens of their workforce?
This is another role for L&D and the wider HR community. Assessing readiness to engage with AI and to change workflows and work practices is an essential first step in introducing a mature strategic pivot in the way tasks are executed. The wider context for AI adoption and use should be central to the work of those in people teams. If it is left to the technology team, we will have a techno-centric strategy – one which potentially fails to address considerations about people, confidentiality, privacy, IP protection, copyright or any of the other concerns which are rightly raised in relation to AI.
Final thoughts
AI is a potential force for good. It is not, however, without its challenges. While I endorse LinkedIn's drive to ensure a wider understanding of Generative AI, the potential for people to do things differently, and to do different and more valuable things, can only be realised alongside considerations of governance and the long-term implications of the changes we seek to make.
To do otherwise is to repeat many of the mistakes made when other technologies were introduced, from the Spinning Jenny onwards.
Postscript – AI's take on the issue
I thought I'd check my analysis by asking Claude AI to give its take on the LinkedIn statements (note to self: must add prompt engineering to my LinkedIn profile). It added some interesting insights:
Claude also made some recommendations:
It looks like we're going to be busy in L&D!
Article originally published on TrainingZone.