AI will force us all to adapt. Here’s how to create the policies, training opportunities and legal safeguards to do it right.
Key takeaways:
- Roughly one-third of HR/TA leaders and job seekers say they’re concerned AI will require them to reskill, upskill or find an entirely new occupation.
- Experts suggest being proactive about writing responsible AI policies and training employees in useful applications of AI.
- It’s crucial to ensure that all employees are aware of the legal risks associated with AI, from bias and security concerns to intellectual property rights.
As businesses around the world race to adopt artificial intelligence (AI) tools, both HR/TA leaders and job seekers are worried about having the skills required to keep up, according to Indeed’s recent Global AI Survey.
The survey, which includes responses from more than 7,000 HR/TA leaders and job seekers, finds that roughly one-third of respondents from both groups are concerned that AI will require them to reskill, upskill or find an entirely new occupation. This is despite the fact that 54% of HR/TA leaders and 45% of job seekers say they are hopeful about the potential impact of AI at work.
That tension between concern and optimism presents an opportunity for business leaders to help their teams adopt AI responsibly. It’s not enough for companies to assume their employees are learning these skills on their own, says Lori Aiken, Head of Talent for Indeed. Leaders need to be intentional about training and showing how AI can be useful. “We know it’s here, and here to stay,” Aiken says. “Don’t be afraid of it. Just embrace it and figure out how to start leveraging it.”
Here’s a look at how you can craft the policies, the training and the legal frameworks to prepare your team for the AI future.
Put guiding responsible AI principles in place
It’s necessary to establish guiding principles on how to use AI responsibly before teams get too steeped in it, says Matt Donovan, Chief Learning and Innovation Officer at GP Strategies, a global talent transformation provider. “AI is a tool. It is neither good nor evil,” he says. “These guidelines help us make better decisions on the use and application of it.”
Your company’s AI guidelines don’t have to be overly detailed or prescriptive. GP Strategies’ six-point list includes items like ‘maintain privacy and security’, ‘ensure safety and well-being’ and ‘drive accountability’. Donovan emphasises that employees at all levels, not just top executives, played a role in drafting these principles to ensure they represent a broad array of perspectives. “It can’t be top-down only,” he says.
Designing your policies this way leaves room for new applications of AI to emerge, as long as they live up to the original principles. That kind of flexibility is key, Aiken says. She recalls a similar dynamic around the emergence of the internet and social media in the workplace: companies that initially tried to prohibit those technologies at work eventually grew to see their enormous upsides. “I would caution anyone against being too restrictive,” Aiken says. “Over time, there will be a population of humans on Earth that will really only know AI as a partner in their day-to-day.”
Be purposeful about AI training and experimentation
Enthusiasm about AI may vary dramatically within your organisation. A recent Cisco survey found that while 82% of board members and C-suite executives say they are highly or moderately receptive to AI, just 69% of employees say the same.
That makes it essential to demonstrate how employees can use AI to their advantage. Donovan suggests setting up an ‘innovation kitchen’, where employees can be trained in new tools and even conduct A/B tests to see which tools perform particular tasks best.
Indeed has created its own voluntary AI training programme, in which people can test out real AI use cases. “We want people to know that if you’re being asked to create a deck or go research something, there’s a lot of different ways that AI can really shorten those efforts,” Aiken says.
Of course, AI isn’t perfect, so it’s important to educate employees on what to watch out for. (Microsoft itself acknowledges that its AI-powered productivity tool, Copilot, is often ‘usefully wrong’, requiring human tweaks to a prompt to yield the best results.) “Make sure people understand: It’s not a point and click, and then you’re done,” Aiken says. “It still needs some level of human oversight and input.”
Make the risks of AI clear
When teams do start experimenting with AI, leaders should inform every employee, regardless of level, of the legal implications and risks involved.
In the hiring process, for example, AI can exacerbate biases in ways that may violate anti-discrimination laws, says Alex King, Indeed’s Vice President, Legal Counsel. “Even if you have no intent to exclude underrepresented groups, you can still end up having created disparate impact through your use of the AI tool,” she says. “There’s liability there, and probably even more significant for your organisation, you’re going to end up with a lack of diversity.” In a recent survey of Australian companies, 23% reported using automated employment decision tools in their recruitment process. Australian organisations using AI are being encouraged to mitigate its inherent bias risks and ensure its use is fair, ethical and effective.
Uploading confidential data into off-the-shelf AI tools like ChatGPT can be a privacy and security risk. Unless you have, for example, an enterprise agreement with OpenAI that includes a strong privacy policy, King says there’s no guarantee that your company’s data won’t be used to train another business’s AI models. Even after you secure an enterprise agreement, King says it’s a good idea to keep track of who is using the AI tool and for what purpose, to keep costs from ballooning. “You want to make sure that it’s going toward something that’s going to be good for the enterprise as a whole.”
Employees should also be aware that the law around who owns the intellectual property AI produces is still evolving. That means your team should think twice before using AI to develop anything requiring legal protection, like a patent. “I’d want to know that in the beginning, so that I understand what kind of ownership I could have later,” King says.
Ultimately, the upfront work of preparing your team for the AI future will be worth it. Indeed Hiring Lab’s own research finds a rapid rise and a long-term upward trend in Australian job postings referring to generative AI, an indicator that, over time, virtually every job will be exposed to generative AI-driven change. But that doesn’t have to be a threat if your team is well prepared, Donovan says. “You will not lose your job to AI,” he says. “You will lose your job to another person who uses AI better.”
Listen to Indeed CEO Chris Hyams interview AI expert and former F/A-18 Hornet pilot Dr. Missy Cummings on the Indeed podcast, ‘Here to Help’.