Each week, the Law.com Barometer newsletter, powered by the ALM Global Newsroom and Legalweek, brings you the trends, disruptions, and shifts our reporters and editors are tracking through coverage spanning every beat and region across the ALM Global Newsroom. The micro-topic coverage will not only help you navigate the changing legal landscape but also prepare you to discuss these shifts with thousands of legal leaders at Legalweek 2023, taking place March 20-23, 2023, in New York City. Registration is now open. Secure your ticket today.
The Shift: AI Was Supposed to Help Improve Diversity. Now It’s Being Regulated for Bias

As artificial intelligence tools became more advanced, they were initially lauded as a critical means of leveling the playing field in areas such as hiring. “Hiring is a data-rich process,” explained Matt Spencer, co-founder and CEO of AI-powered recruiting network Suited. “What if [law firms] could use this data to solve their pipeline, diversity, and performance problems?”

The prospect of incorporating AI into hiring processes unlocked new potential for legal employers to more equitably consider candidates on a wider variety of factors and to build more diverse, well-rounded talent pools. “While there are a number of benefits to introducing AI into your hiring process, the most invaluable one is the inevitability of hiring high-performing, diverse candidates that will add value to your firm,” Spencer said.

At the same time, the notion that AI carries a significant potential for bias was nothing new. As early as December 2021, New York City enacted Local Law 144, initially intended to take effect Jan. 1, 2023, which governs the use of AI tools known as “automated employment decision tools” (AEDTs) in hiring decisions. Specifically, the law bars employers from using such tools unless they are audited annually for bias.

This push and pull eventually led to a growing consensus that AI can both aid and inhibit efforts to be more inclusive in hiring, and that the focus needed to shift to understanding the tools’ bases for decision-making. “[You have to] explain what went into the algorithms, what the machine was learning, what was the initial input, what were the goals?” said Loren Gesinsky, partner at Seyfarth Shaw, at a Minority Corporate Counsel Association conference session. “It’s not just about the impact, it could be about what led to the impact and that could be a mess.”
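For readers curious about the mechanics, a bias audit of this kind generally comes down to comparing selection rates across demographic groups. The short Python sketch below shows one common way such a comparison can be expressed, each group’s selection rate as a ratio of the most-selected group’s rate. The group labels, data shape and 0.8 flag threshold are illustrative assumptions for this example, not terms of the statute or of any particular vendor’s audit.

```python
# Illustrative sketch only: selection rates per group, expressed as a ratio
# against the most-selected group. The data shape, group labels and the 0.8
# threshold are assumptions for the example, not requirements of Local Law 144.
from collections import defaultdict

def impact_ratios(outcomes, threshold=0.8):
    """outcomes: iterable of (group, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    # Selection rate per group, then each group's ratio to the highest rate.
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: {"rate": round(r, 3),
                "impact_ratio": round(r / top_rate, 3),
                "flagged": (r / top_rate) < threshold}
            for g, r in rates.items()}

if __name__ == "__main__":
    sample = [("Group A", True), ("Group A", True), ("Group A", False),
              ("Group B", True), ("Group B", False), ("Group B", False)]
    print(impact_ratios(sample))
```

In this toy data set, Group A is selected at twice the rate of Group B, so Group B’s impact ratio falls below the illustrative threshold and is flagged for review.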
Sponsored By ALM

Returning March 20-23, Legalweek will feature four days of premier content tackling topics around data privacy, discovery innovation, investigations, the future of legal AI, and more, all while offering curated deep dives into the latest in legal technology. Join thousands of legal professionals in New York City and gain the tools you’ll need to get legal business done. Read More
The Conversation

Even proponents of using AI to increase DEI have been clear that it needs to be used under proper guidance. The ability to solve pipeline, diversity and performance problems “is what AI-enabled recruiting promises and delivers when built and used correctly,” Spencer noted.

“Built and used correctly” has become the pivotal issue. The U.K. took the lead in building a framework to regulate AI in mid-2022, while U.S. and EU plans to do the same are still at a nascent stage. At the same time, the use of AI in recruiting and hiring was up 19% from the previous year, according to an October 2022 European Employer Survey Report by Littler Mendelson.

Still, even though employers know that more regulation might be coming, few are taking steps to mitigate their compliance risks around using AI in hiring, partly because they don’t understand whose responsibility it is to do so. “Companies use vendors and they don’t understand what the vendors’ algorithms are doing,” noted Mary Jane Wilson-Bilik, a partner at Eversheds Sutherland. “I’ve definitely run into that where the contracts that companies have with their vendors are very inadequate and don’t require the vendor to indemnify them if, in fact, the algorithm is biased.”

The New York City law was poised to be the first real attempt at significant regulation of AI in hiring practices, but its implementation was ultimately delayed; it is now scheduled to go into effect on Apr. 14, 2023. Even though the law is not yet in effect, more companies are taking the prospect of AI bias regulation and compliance seriously, and the push toward more regulation of AI is gaining momentum. The Equal Employment Opportunity Commission (EEOC) has designated investigations of potential discrimination in the use of AI in recruiting, screening, hiring and promoting employees a priority of the agency’s Strategic Enforcement Plan for 2023 through 2027.

At an EEOC listening session held shortly after the plan was announced, Emily Martin of the National Women’s Law Center expressed fear about the ability of AI hiring technology to “replicate and systemize harmful and stereotyped decision-making, while also making such discrimination more difficult to challenge because of the black box nature of those decision-making processes.”

The EEOC has continued to provide clarity on the propriety of using AI in hiring in recent months. In January 2023, the agency hosted a panel titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” at which experts from the legal industry, tech and academia discussed whether AI is working the way it should in HR processes. The panelists expressed doubt about the ability of AI tools to identify the best job candidates without violating anti-discrimination laws, citing, in particular, the fact that most machine learning systems are trained on data sets that contain large amounts of information rooted in institutional biases based on race, gender, sexual orientation and more.

“The employer doesn’t have to explicitly state a discriminatory preference. The software might simply learn those preferences by observing its past hiring decisions,” said Pauline Kim, Daniel Noyes Kirby Professor of Law at Washington University School of Law in St. Louis. “Even employers who have no discriminatory intent could inadvertently rely on an AI tool that is systematically biased. So these automated systems truly do represent a new frontier in civil rights.”
The Significance

The power of AI tools has grown exponentially in recent months since the release of more advanced large language models and generative AI tools. Absent federal regulation, liability for AI bias will only continue to grow as more AI tools become available. On whom that liability will ultimately fall is likewise an increasingly complicated question.

Current proposed regulations do not put the onus for biased AI outcomes on the developers. “It does create liability for employers if their use of automated employment decision tools leads to bias, but notably absent from that statute is liability to providers,” noted Gesinsky, who practices labor and employment law at Seyfarth Shaw. “The liability as written in the statute is all on the employer. So generally to my awareness, there’s not right now a regime of statutory liability for developers, providers of AI.”

In response, more and more companies are looking to contractually hold their AI providers liable and to require indemnification in the event that lawsuits arise out of biased outcomes. “A lot of that does take place in contracting back and forth when you’re trying to put these things together to make sure that hopefully, the person that knows the most about how this model was done, can come up and defend it and be part of the process,” said Mark Lyon, partner in Gibson Dunn’s Artificial Intelligence and Automated Systems practice group. He added, “I think we will be seeing more.”

The way the regulatory tide is shifting, AI companies that might previously have operated unchecked, without conducting bias audits and assessments on the front end, may soon find themselves contractually obligated to do so. Gesinsky noted two additional contracting practices around AI bias that might become trends: requiring representations regarding bias audits of AI tools, rather than simply providing for indemnity, and requiring those audits to be conducted by an “independent” third-party auditor, as the New York City law requires.

To be sure, some tech companies are already checking for AI biases on the front end, while others have yet to catch up. “Some are, but some are really nervous about it. That’s why sometimes companies should do it under privilege,” Wilson-Bilik said. “That’s one of the things that is happening slowly. We’ve done that for a number of companies. That is another evolving trend.”

Currently, the U.S. and the EU have agreed to collaborate on AI research, and they are increasingly looking to each other on how to regulate AI. That said, experts don’t expect the U.S.’s approach to be as intensive or as granular as the EU’s, or even Canada’s. “Some of the AI policies on both sides of the Atlantic will necessarily reflect the existing ways that each jurisdiction handles this,” explained Edward Machin, a London-based associate in the data, privacy and cybersecurity group at Ropes & Gray. “So the EU goes quite detailed, prescriptive, not necessarily set to specific regulation, whereas the U.S. is a little bit more, just by nature of how its legal system is set up, light touch, sector-specific and not as, frankly, onerous as the EU.”

The Information

Want to know more? Here's what we've discovered in the ALM Global Newsroom:

US, EU Agree It’s Time to Regulate AI. Will They Agree on How?
Absent Federal Regulation, AI Bias Liability Is Growing—And Getting More Complicated
As AI's Role in Hiring Grows, Focus Turns to Understanding Its Decision-Making
Experts Seek Clarity From EEOC on Handling AI Bias in HR Processes
Inside Track: HR's Embrace of AI Will Keep Legal Departments Busy
The Strategic Advantages of Using AI in Legal Hiring
Artificial Intelligence Law and Policy Roundup
NYC Local Law 144 and the New Frontier of Algorithmic Decision-Making in the Workplace: What Employers Need To Know as They Prepare for 2023 and Beyond

The Forecast

While the groundwork for what AI bias regulation will ultimately look like is still shaky, the fact that regulation is indeed coming seems certain. AI continues to advance, and companies are continuing to find more ways to incorporate it into workflows across all aspects of doing business. When applied properly, these technologies have proven valuable in hiring and other HR functions, and the results have the potential to move the needle on DEI efforts. Monitoring for bias in the use of these helpful tools will be critical going forward. Expect to see even more tools emerge that can play a critical role in improving diversity and serving the greater good, but expect even more scrutiny and regulation of those tools as well.
|