Technology roles are changing all the time. But it’s not just the scope and remit of Information Technology (IT) jobs that are in constant flux; the designation and nature of the roles themselves are shifting too. The newest tech job function coming to prominence may be the AI-legal engineer.
We already have data governance professionals, data regulation consultants, privacy gurus, risk management technicians and all manner of General Data Protection Regulation (GDPR) specialists, but it is the wide-ranging impact that Artificial Intelligence (AI) is having on modern workplaces that is creating this next tech role.
Law-abiding AI apps
The use of AI and complex algorithms for decision making (self-driving cars, mortgage and credit decisions, criminal justice, immigration and so on) is creating new legal challenges. Because of this, the laws and regulations that govern us will need to adapt to an algorithm-driven society. The practical outcome is a new requirement for our applications themselves to adhere to domestic and international laws.
Data management AI platform company Immuta is one of the first organizations to formalize the role of the legal engineer. Yale law graduate and former FBI specialist Andrew Burt is Immuta’s chief privacy officer and legal engineer.
“As we automate more and more of our activities through AI and other means, embedding legal interpretation directly into software systems is critical. That’s exactly what our legal engineering team at Immuta is focused on achieving,” said Burt.
“Without legal engineering, compliance efforts simply cannot scale. If compliance can’t scale, we’ll be forced to choose between adopting new technologies that don’t follow our laws, or abandoning those new technologies altogether. But that’s not a choice we should have to make.”
Since helping to establish this position at the company, Burt has been building a team of coders with legal backgrounds, drawn from academia, practicing attorneys and elsewhere.
The team is now tackling the ethical challenges of AI by embedding the laws and regulations governing AI and society directly into software, so that these regulations become machine-executable.
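What “machine-executable” regulation can look like in practice is policy-as-code: a rule from a statute expressed as a check that runs before data is used. The sketch below is a minimal, hypothetical Python illustration, not Immuta’s actual implementation, encoding a simplified GDPR-style purpose-limitation rule: a query over personal data is allowed only if its declared purpose matches one the data subjects consented to.

```python
from dataclasses import dataclass, field

# Hypothetical policy-as-code sketch: a GDPR-style purpose-limitation
# rule expressed as an executable check. Illustrative only.

@dataclass
class Dataset:
    name: str
    contains_personal_data: bool
    consented_purposes: set[str] = field(default_factory=set)

@dataclass
class Query:
    dataset: Dataset
    declared_purpose: str

def purpose_limitation_check(query: Query) -> bool:
    """Allow the query only if its declared purpose is one the
    data subjects consented to (GDPR Art. 5(1)(b), simplified)."""
    if not query.dataset.contains_personal_data:
        return True  # the rule applies to personal data only
    return query.declared_purpose in query.dataset.consented_purposes

# Usage: a marketing query against data consented for fraud detection
customers = Dataset("customers", True, {"fraud_detection", "billing"})
print(purpose_limitation_check(Query(customers, "marketing")))        # False
print(purpose_limitation_check(Query(customers, "fraud_detection")))  # True
```

The point of expressing the rule this way is that it runs automatically on every query, which is what lets compliance scale in the sense Burt describes.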
Sophie Stalla-Bourdillon, a member of Immuta’s UK team who now serves as the firm’s senior privacy counsel and legal engineer, spent the past decade as a professor of information technology law and data governance at the University of Southampton.
Stalla-Bourdillon describes her role as examining risk management frameworks and embedding aspects of those frameworks within the Immuta platform. She also works out how to implement agreed best practices for reducing bias and risk in machine learning as she continues to break down the EU’s GDPR requirements. For her, it’s all about framing these practices as digestible, easy-to-scale methods that help customers control risk across their data science programs.
“GDPR sets new standards for the creation and implementation of AI. Legal engineering is the most effective way to make sure best practice is followed as early as possible,” said Stalla-Bourdillon.
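One example of the kind of digestible, scalable best practice described above is the “four-fifths rule” for disparate impact, a widely used fairness screen for machine learning decision systems. The Python sketch below is again a hypothetical illustration rather than Immuta’s method: it flags a model whose approval rate for one group falls below 80% of the rate for the most-favoured group.

```python
# Hypothetical sketch of a scalable fairness screen: the "four-fifths
# rule" for disparate impact, often used to audit ML decision systems.
# Illustrative only; not Immuta's actual risk framework.

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """outcomes maps group name -> list of binary decisions (1 = approved).
    Returns the minimum approval rate divided by the maximum approval rate."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return min(rates.values()) / max(rates.values())

def passes_four_fifths_rule(outcomes: dict[str, list[int]]) -> bool:
    """Flags disparate impact when any group's approval rate is
    below 80% of the most-favoured group's rate."""
    return disparate_impact_ratio(outcomes) >= 0.8

# Usage: loan approvals recorded per demographic group
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
print(disparate_impact_ratio(decisions))   # 0.5
print(passes_four_fifths_rule(decisions))  # False -> review the model
```

A check like this is cheap to run on every model’s outputs, which is what makes it the kind of practice that can be standardized across a customer’s data science program.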
Will AI run the world?
With AI getting smarter all the time and machine-driven systems now making decisions for us on a daily basis, this subject has of course come to increasing prominence. Worries over the ‘rise of the robots’ and the likelihood of some massive AI brain gradually becoming sentient and taking over our lives are still the stuff of science fiction (SciFi), for now at least.
Outside of SciFi and the movies — and if we implement a strict code of legal governance and law-abiding regulatory checking mechanisms — we should be able to build AI brains and the software applications they run safely enough.
*Adrian Bridgwater is a technology journalist who tracks enterprise software application development & data management. This article was first published in Forbes magazine.