
The use of data scientists by law firms grows alongside AI challenges

Several large law firms are turning to specialists to strengthen their artificial intelligence compliance practices, a step they have not taken in more established areas of law.

They are hiring data scientists and technologists to test client systems for bias, ensure compliance with emerging regulations, and rethink their own legal offerings, which themselves can be improved through the use of AI.

This emerging field, which has captured the popular imagination with the often startlingly human behavior of AI, also gives rise to novel legal issues.

“Legal and technology issues are inextricably linked, and we believed more than five years ago, when we launched our practice, that truly becoming an AI firm required both legal and technological understanding,” said Danny Tobey, partner and global co-chair of the AI and Data Analytics practice at DLA Piper.

Unlike other areas of law such as environmental regulation or automobile safety, where legal experts routinely handle complex details, AI poses unique challenges that require the expertise of technologists, Tobey said.

“AI is unique because we are not only talking about an incredibly complex and new technology that is developing every day, but at the same time we are remaking the infrastructure of how we practice law,” Tobey said in an interview. “A true AI practice combines legal and technological skills.”

DLA Piper is one of many multinational firms employing this strategy. Faegre Drinker has a subsidiary called Tritura that employs data scientists to advise clients on the use of AI, machine learning and other algorithm-driven technologies, according to its website. DLA Piper, which employs 23 data scientists, confirmed it hired 10 of them from Faegre Drinker last year.

Faegre Drinker did not respond to emails seeking comment.

Others employ technologists to integrate AI into their own practices.

A&O Shearman announced last year that it had launched an AI tool called Harvey, built using OpenAI’s ChatGPT platform, which could “automate and improve various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance.”

Clifford Chance said in February that it had deployed an internal AI tool called Clifford Chance Assist, developed on Microsoft’s Azure OpenAI platform. The tool would be used to automate routine tasks and improve productivity, the firm said.

“Teams of legal technologists across the United States and around the world are brainstorming automation and AI solutions that could benefit us as legal professionals,” said Inna Jackson, Americas technology and innovation attorney at Clifford Chance, in an interview.

Red teaming and governance

To help its clients determine whether their AI models are operating within regulations and laws, DLA Piper regularly uses so-called red teaming, a practice in which specialists simulate attacks against physical or digital systems to see how they hold up.

“We are working with a major retailer to test various facial recognition solutions to ensure that they not only deliver on their technical promises, but also comply with the law, the latest statements from federal agencies, and AI-related legislation,” Tobey said.

He noted that companies are also rapidly integrating AI into human resources, “from hiring to promotion to termination.”

“This is an incredibly regulated and fraught area that increases the risk of algorithmic bias and discrimination,” he said.

Clients large and small are looking for proper controls, Jackson said.

Large clients “want to figure out what is the right governance model to use for deploying AI, building AI, partnering for AI,” Jackson said in an interview. Smaller clients, meanwhile, are likely building governance practices from scratch, she said.

“And by governance, I mean the processes, the controls, the thinking about the laws and regulations that may apply, the best practices that may apply,” Jackson said. “So everyone is thinking about the best ways to approach AI.”

DLA Piper and Clifford Chance were among 280 organizations chosen to participate in the Artificial Intelligence Safety Institute Consortium, part of the National Institute of Standards and Technology.

The goal is to develop “science-based and empirical guidelines and standards for AI measurement and policy, laying the foundation for AI security globally,” according to the AI Safety Institute.

Although Congress has yet to pass broad legislation covering the use of AI, the European Union’s AI Act, which took effect in August, applies to multinational companies that deploy AI systems if those systems are used to make decisions affecting EU citizens, Clifford Chance said in a notice to clients.

The EU law, which prohibits discrimination and bias, “will have a significant impact on employers and human resources professionals who use or plan to use AI systems in their operations, recruitment, performance evaluation, talent management and workforce monitoring,” Clifford Chance said.

“Clients with a global presence in particular want to know how to think about the applicability of the EU AI Act to their operations, not only in the EU, but perhaps also widely outside the EU,” Jackson said. Clients are looking for guidance on creating a set of practices that would be acceptable across jurisdictions “because a market-segmented approach obviously would not be practical,” she said.

Companies are also trying to determine what AI guardrails will be put in place in the United States, said Tony Samp, head of AI policy at DLA Piper.

“For every company our data analysts, red teams and lawyers work with, there is a parallel need to understand the AI regulatory landscape in Washington, D.C., and the direction of congressional interest,” Samp said in an email.

Samp previously served as a senior adviser to Sen. Martin Heinrich, D-N.M., one of four lawmakers tapped by Senate Majority Leader Charles E. Schumer to write a report on AI innovation and regulation.

Samp said the law firm recently hired former Sen. Richard M. Burr of North Carolina, a Republican who chaired the Intelligence Committee, to advise clients on the direction U.S. legislation might take on AI.