
AI and Workplace Equity: Why It Matters, Why Now?

Investor Perspective: Leverage AI for High Performance, but Enhance Disclosure

  Investors are increasingly looking at human capital management (HCM) as a determining factor in achieving superior performance. In discussions around the “future of work,” we acknowledge that the future is here—the workforce is now more mobile, more diverse, more agile, and increasingly shaped by data and analytics.
  The COVID-19 pandemic has accelerated the digital transformation of businesses—from resume screening and AI video interviews to virtual onboarding and digital workplace interactions. Employee activity in a hybrid work environment presents a huge opportunity for technology to improve efficiency, but not all companies have clearly articulated when and how they employ analytics technology on their employees. In one recent BBC report, artificial intelligence technology designed for recruitment was used to set layoff targets, with unintended psychological effects on employees and possible damage to the employer’s brand and financial health. A series of interviews with tech leaders in the documentary The Social Dilemma (released in 2020) highlights the dangers of gathering information in the workplace, and shows how related practices can affect people’s minds by causing anxiety and depression.
  ShareAction, a UK-based NGO, established the Workforce Disclosure Initiative (WDI) at the end of 2016 to improve the transparency and accountability of businesses on workforce issues. WDI is supported by 68 institutions, including HSBC Asset Management, with combined assets under management exceeding US$10 trillion. In 2021, WDI introduced a new metric that requires companies to describe the measures they use to monitor employees, and how they ensure that such monitoring does not disproportionately affect employees’ privacy rights.

  Beyond surveillance, investors also recognize that it is generally difficult to reduce human behavior to an algorithm. For AI to work well as a tool, decision rules need to be quantifiable, and computers need to be able to process massive amounts of data almost instantaneously. Conversely, artificial intelligence cannot function well as a tool when understanding human characteristics requires context, and when the impact on human behavior is subtle. (Christine Chow)
Scholar’s Perspective: AI Can Be Adopted in Contexts Where It Shows Fewer Decision Biases Than Humans

  A scientific approach to assessing the effectiveness of AI decisions in human capital management (HCM) is to assess the degree of bias in algorithmic decisions. AI has been proposed as a way to eliminate human biases in areas such as hiring, promotion and compensation decisions, so, to stay true to that intention, it must be evaluated to ensure that its use weakens rather than reinforces human cognitive biases. As research in artificial intelligence continues to advance and conclusions about the suitability of specific AI techniques for HCM become clear, we must remain vigilant about how we judge AI: research shows that we tend to judge AI against a standard of “best”, perfect decision-making, which is not appropriate. This may partly explain the backlash against AI adoption. The reality is that adopting AI replaces human decision-making (not optimal, perfect decision-making), and human decision-making is itself far from perfect and often riddled with biases. A better way to assess the applicability of AI technologies in HCM is therefore to directly compare AI and human decision-making outcomes: AI can be adopted in settings where it exhibits less decision bias than humans.
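The comparison described above can be made concrete with a simple bias metric. The sketch below, with purely illustrative data and a hypothetical metric choice (the gap in selection rates between two applicant groups, sometimes called demographic-parity difference), shows how one might compare an AI screener against human reviewers on the same applicant pool and adopt the AI only where it is the less biased decision-maker:

```python
# Illustrative sketch: compare decision bias of an AI screener vs human
# reviewers on the same applicant pool, using the gap in selection rates
# between groups as the bias metric. All data below is made up.

def selection_rate(decisions, groups, group):
    """Share of applicants in `group` who received a positive decision."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def parity_gap(decisions, groups):
    """Absolute difference in selection rates between groups A and B."""
    return abs(selection_rate(decisions, groups, "A")
               - selection_rate(decisions, groups, "B"))

groups          = ["A", "A", "A", "A", "B", "B", "B", "B"]
human_decisions = [1,   1,   1,   0,   1,   0,   0,   0]   # 75% vs 25%
ai_decisions    = [1,   1,   0,   0,   1,   1,   0,   0]   # 50% vs 50%

human_gap = parity_gap(human_decisions, groups)  # 0.5
ai_gap    = parity_gap(ai_decisions, groups)     # 0.0

# Adopt AI only where it exhibits less bias than the human baseline.
print(f"human gap={human_gap:.2f}, AI gap={ai_gap:.2f}, "
      f"adopt AI: {ai_gap < human_gap}")
```

In practice the metric, the groups compared, and the threshold for "less biased" are all context-specific choices that would need careful justification.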
  The capabilities of AI are generally considered to fall into two broad categories: weak AI and strong AI. Weak AI is specific to one task, while strong AI mimics human intelligence and can generalize to other environments. AI currently performs well in specific contexts but has limited generalizability. For HCM, therefore, AI validated only on context-specific datasets should be used with caution, especially as many proposed uses of AI in HCM in effect require strong AI. Companies should set their expectations accordingly for what AI can reliably do.
  Since the work environment is changing rapidly, we recommend using dynamic rather than static AI – algorithms should be regularly updated with new data to prevent decisions that are outdated, irrelevant, or even no longer fit for purpose. (Paris Will)
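The recommendation to keep AI dynamic rather than static can be sketched as a retraining loop triggered by data drift. The example below is a deliberately simplified, hypothetical illustration (a mean-shift check on one feature stands in for a real drift test, and the threshold is an assumed value):

```python
# Illustrative sketch of "dynamic rather than static AI": retrain the
# model whenever incoming data drifts from the data it was trained on.
# The drift test (mean shift in a single feature) and the threshold are
# deliberately simple stand-ins for production-grade checks.

from statistics import mean

DRIFT_THRESHOLD = 0.25  # assumed tolerance for feature-mean shift

class DynamicModel:
    def __init__(self, training_data):
        self.training_data = list(training_data)
        self.trained_mean = mean(self.training_data)

    def drifted(self, new_data):
        """True if the new data's mean has shifted beyond the threshold."""
        return abs(mean(new_data) - self.trained_mean) > DRIFT_THRESHOLD

    def maybe_retrain(self, new_data):
        """Fold in new data and retrain only when drift is detected."""
        if self.drifted(new_data):
            self.training_data.extend(new_data)
            self.trained_mean = mean(self.training_data)
            return True   # retrained on refreshed data
        return False      # still fit for purpose

model = DynamicModel([0.9, 1.0, 1.1])        # trained mean = 1.0
print(model.maybe_retrain([1.0, 1.05]))      # small shift: no retrain
print(model.maybe_retrain([1.5, 1.6]))       # large shift: retrains
```

The design choice worth noting is that retraining is gated on an explicit, auditable drift check rather than running on a blind schedule, which aligns with the transparency and back-testing expectations discussed later in this article.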
Legal and Regulatory Perspective: Understanding the Legal and Regulatory Implications of Introducing AI Systems and Processes

  Understanding the legal and regulatory implications of AI in the workplace is certainly in the interest of all involved, especially those who may not yet be aware that they should care. Failure to understand and address these implications can have serious consequences for some stakeholders.
  One problem in trying to understand these implications is that few laws and regulations anywhere in the world specifically apply to AI applications in HCM. Forward-looking laws and regulations designed specifically to address AI in the workplace exist in different jurisdictions, most notably a significant regulation proposed by the European Union, which will have effect not only within the EU but also beyond it. In many countries, however, how employers apply AI in the workplace is governed by a body of existing laws and regulations.
  Unfortunately, when, how and why these laws and regulations apply, and what they mean in practice, are not immediately clear. What is clear is that concern about, and resistance to, the use of artificial intelligence, automated decision-making and related processes in and outside the workplace is growing, and regulatory and legal challenges are looming.
  We try to address this issue by, first, calling on boards to consider accountability and governance arrangements; second, setting out when, how and why laws and regulations apply, or may apply; and, in addition, drawing attention to the following:
  · the range of laws and regulations that now specifically apply to the adoption and use of AI in the workplace;
  · the potential impact of the proposed EU AI Act;
  · the existing national laws and regulations most likely to apply to AI applications in HCM, subject to judicial interpretation and application.

  Along the way, we hope that all those involved with AI in HCM (and those who should be paying attention) can better understand, plan for and manage their response to the governance approaches, laws and regulations, reputational impacts, risks and exposures that may arise after the introduction and deployment of AI systems and processes. (Mark Lewis)
What works: companies should proceed cautiously and prepare from the top down

  AI analytics are unlikely to disappear from the workplace, now or in the future. Therefore, to use AI in HCM better and more appropriately, we need to embrace it with care and with respect for employees:
  · Board-level oversight should focus on articulating the intent of AI applications; establishing clear AI training and testing processes, including back-testing; ensuring transparency in the use of AI applications and accountability for their results; and requiring effective and thorough due diligence on third-party products in the supply chain that may use AI. Such due diligence is currently often missing, or performed by people who lack sufficient expertise;
  · Companies should understand the legal and regulatory implications of adopting and relying on AI systems and related processes in the workplace, and manage them properly;
  · Companies should build a machine learning platform that creates automated and repeatable data preparation and feature engineering processes, ensuring consistency over time. The platform should have data versioning capabilities to track changes in the data used, the different model experiments, and test results. There should be a feedback loop, with associated documentation, that integrates results and lessons learned to improve performance over time;
  · Companies should define who is responsible, and have a set of corrective methods and clear remedial mechanisms in place, to ensure accountability under different circumstances;
  · When engaging with companies, investors can request disclosure and explanation of the key performance indicators (KPIs) highlighted below, which cover recruitment, culture and performance (Figure 1).
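The machine learning platform capabilities recommended above (versioned data, tracked experiments, and a documented feedback loop) can be sketched in outline. The classes below are hypothetical and only illustrate the shape of the idea; real platforms such as MLflow or DVC provide these capabilities in production form:

```python
# Hypothetical sketch of the platform capabilities recommended above:
# versioned datasets, tracked model experiments, and a feedback loop
# recording lessons learned. Names and structure are illustrative.

import hashlib
import json

class MLPlatform:
    def __init__(self):
        self.data_versions = {}   # version id -> dataset snapshot
        self.experiments = []     # tracked runs: data version + result
        self.feedback = []        # lessons learned over time

    def register_data(self, dataset):
        """Version a dataset by content hash so changes are traceable."""
        version = hashlib.sha256(
            json.dumps(dataset, sort_keys=True).encode()
        ).hexdigest()[:8]
        self.data_versions[version] = dataset
        return version

    def log_experiment(self, name, data_version, test_score):
        """Record which model ran on which data, and how it scored."""
        self.experiments.append(
            {"name": name, "data": data_version, "score": test_score}
        )

    def log_feedback(self, note):
        """Document results and lessons learned for future iterations."""
        self.feedback.append(note)

platform = MLPlatform()
v1 = platform.register_data([{"feature": 1, "label": 0}])
platform.log_experiment("screening-model-a", v1, test_score=0.81)
platform.log_feedback("Back-test on earlier cohorts before promoting.")
```

Content-hashing the dataset means any change to the data produces a new version identifier, which is what makes experiments reproducible and auditable over time.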

  The author, Dr Christine Chow, is Global Head of Management and a board member of HSBC Asset Management. She has 25 years of experience in investment management, research and consulting, with a focus on technology and sustainability. She is also a board member of the International Corporate Governance Network and was appointed honorary advisor to the Hong Kong Financial Reporting Council in 2021. Paris Will is the lead corporate research advisor for the London School of Economics’ Inclusion Initiative. Mark Lewis is currently a Senior Advisor at the UK law firm Macfarlanes LLP and a Visiting Professor at the London School of Economics, with 30 years of experience.
