BrightHR’s CTO Alastair Brown on AI in the workplace: how to adopt it ‘correctly’

BrightHR

Australia, 19 September – If you’ve seen a news headline recently, or ventured online at all, you’ll know that artificial intelligence is here to stay.

It’s certainly not a new thing. For years now, AI has been removing inefficiencies and helping teams become more effective, and you might not have even realised the part it’s been playing. Whether it’s automating customer interactions, reducing the burden of repetitive tasks, or forecasting sales, the list goes on…

But recently, it’s been thrust into the limelight, and this has prompted a series of questions from employers and employees alike on just how this technology is going to shape the future of work.

BrightHR’s Chief Technology Officer Alastair Brown believes AI is something to be embraced in the world of work, and its successes have already been highlighted by its use in functions such as Finance, Marketing, IT, and Customer Service. However, Alastair stresses, “AI must be adopted correctly.”

We sat down with Alastair to answer some burning questions employers have on this topic.

How is AI impacting the workplace?

“The National Science and Technology Council’s recent report stated that the rapid expansion of ChatGPT shows how difficult the sheer potential of generative AI technologies is to predict on any timeline. But one way to model future possibilities is by focusing on ‘impact spaces rather than specific opportunities’.

“An earlier report by the World Economic Forum estimated that the share of work done by machines would increase from 29 per cent (in 2017) to more than 50 per cent by 2025, but that this shift would be accompanied by new labour market demands resulting in more jobs.

“So, the simple answer here is that we don’t yet have a firm answer on how far AI will change the workplace. Its rapid rate of development has taken much of the world by surprise, and with the technology evolving exponentially, it’s going to be a challenge to stay on top of it all. But to stay compliant, productive, and competitive, we need to rise to that challenge.”

So, does AI mean that we will lose jobs?

“There’s a train of thought that the widespread adoption of AI will have negative consequences for some types of roles, in that machines will replace humans. But don’t forget: the Industrial Revolution may have introduced machines that replaced a proportion of the manual labour force, just as AI now replaces some menial processing tasks.

“However, more people were needed to operate those machines, and many of the labourers who were replaced were redeployed into other work. Fast forward to today and we have a reasonably healthy labour market, rather than millions of people displaced by machines and unemployed. So, it’s not necessarily a case of having to protect jobs, but rather a question of how to utilise the technology to enhance them.”

Should my workplace be utilising AI?

“We recently issued a survey at BrightHR asking clients about their AI usage, and more than a third reported using it. Of that group, the top three functions it was used for were admin (40%), creative writing (35%), and internal communications (22%).

“My advice would be: if you feel there is a significant benefit, use it, but in moderation, as an assistance tool rather than an end-to-end work tool. And don’t forget: mass adoption of widely available AI carries significant risks. It can be useful and beneficial, but there are often pitfalls.

“It would be easy to become dependent on something like ChatGPT, which would in time erode your writing skills and practice. Use it as you would any research tool: make sure you understand the output and adjust it where appropriate. Ownership is really important: make sure you always own your work. Use the AI, but make sure it’s not using you.”

What risks do I need to be aware of?

“BrightHR’s survey data shows that around three in 10 employers cite security risk as their key concern when using AI. Inputting company data into AI platforms will usually be a big no-no and will breach your company’s privacy policy. Samsung staff found this out the hard way earlier this year.

“Beware of breaching copyright too. After all, you don’t know where on the internet the information is being pulled from.

“The second biggest concern, cited by one in five employers, was the margin of error. Information created by AI is likely to contain some errors, so make sure you understand it and check that everything produced or refined using these tools is factual, accurate, and well researched.

“Here’s an interesting case where we see these pitfalls in play: in New York earlier this year, an attorney with 30 years’ experience, Steven A. Schwartz, used AI to conduct legal research in a case where a man was suing an airline. It was discovered that six of the cases cited to persuade a judge to move the case forward were in fact completely bogus… ChatGPT had made them up.

“If you’re thinking of using external AI platforms to provide clients with legal expertise or advice, think again. There are too many risks and not enough controls.

“Advice-centred AI-powered tools are great for saving time and stress – take our own tool Bright Lightning for example. But as is the case with our Lightning tool, all legal information should still be sourced through qualified professional experts.”

How can I reduce these risks and stay on the ball when it comes to AI?

“Firstly, assess your tech providers. With any technology you adopt within your organisation, you must carry out due diligence on data protection protocols, covering everything from data management to security. You can’t afford to get this wrong.

“Consider your internal security and protection from cyber threats. As with any new tool, website, or provider you bring into your organisation, it’s important to have robust policies and processes in place to ensure that not just any tool can be accessed within your business network.

“Next, understand that legal expertise should still be people-driven. Public chat tools still produce many inaccuracies when providing legal advice in areas like employment law. Be mindful of this, and of where the information comes from. Remember: AI-powered advice is great for speed, but the information must still be validated by qualified professionals.

“Employers will benefit from carrying out regular assessments of their work processes to identify where AI could be used to improve efficiency and productivity. Where there is an opportunity, though, you’ll need to assess and control any associated risks.

“In all scenarios, pay attention to the areas and industries being disrupted, be alert to the changes and what’s coming, train where your discipline is vulnerable, and be aware of how your job may be affected.

“Reading the tech press is by far the best way to stay informed. But be sure to use trusted, reputable sources known for reliable coverage, disregard speculation and sensationalism, especially on social media, and seek to understand what’s changing.”

What should an AI policy entail?

“It’s best practice, given that AI is certainly not going away, to introduce an ‘AI in the workplace’ policy. It should consider the ethical and legal implications of using such technologies and clearly outline the roles and responsibilities of all employees when working with AI systems, including guidelines for data privacy and security.

“Additionally, the policy should address the potential impact on job roles and the need for retraining or upskilling. It is also important to establish a process for monitoring and auditing the use of AI systems to ensure they are being used ethically and fairly. Lastly, the policy should address any potential biases in the AI systems being used and establish a plan for addressing and mitigating these biases.”

So, what’s the take-home message?

“The point is – AI, like many technologies, isn’t going anywhere. And though many are uncertain about how it will impact the world of work, employers should indeed embrace it, but they must:

  • be responsible,
  • use it ethically,
  • exercise caution,
  • protect sensitive information,
  • steer clear of taking results at face value, instead making sure any information is checked by experts.”
