A lot of acronyms: the EU’s AI Act, the newly formed NAIAC, AI announcements at Google I/O, talks at ODSC East, and Arthur’s new RBAC feature.
Deep Dive
The Latest in AI Regulation & Government
There’s been no shortage lately of news related to AI regulation and government. Let’s recap what’s been going on and the potential implications for companies around the world.
The AI Act: The EU is in the process of drafting the AI Act, which is the first-ever attempt to enact a horizontal regulation of AI. It separates AI applications into three risk categories—unacceptable risk, high risk, and low/minimal risk—and either bans them or requires internal checks based on where the technology falls in that ranking.
National AI Advisory Committee: In the Biden administration’s first substantive step to shape U.S. policy on AI, the NAIAC was formed last month. It includes executives from companies like Google, Microsoft, AWS, and Salesforce as well as academics from Carnegie Mellon, Stanford, and Cornell. This month, the group met for the first time to set a vision.
CT’s New Requirement: Regulation is happening on the state level as well. Connecticut recently released a notice that reminds insurers of their obligation to ensure that their use of Big Data/AI complies with applicable anti-discrimination laws, and also requires all CT domestic insurers to complete an annual data certification by September 1.
Whether it’s international, federal, or at the state/city level, it’s clear that AI-related concerns and regulation are increasing. In this expanded regulatory environment, organizations—especially those in highly regulated industries like financial services or healthcare—will need internal tools and systems (like Arthur’s bias detection and MRM) to manage their AI risk.
Among the AI-related announcements at Google’s annual developer conference were the AI Test Kitchen, chain-of-thought reasoning in PaLM and LaMDA, the world’s largest publicly available ML hub, and a new 10-shade skin tone scale to improve AI diversity.
Continuing the trend of large deep learning models, OpenAI recently introduced the second-generation version of DALL·E: an AI model trained to create realistic images and art from a description in natural language.
Democratizing Access to Large-Scale Language Models with OPT-175B
Meta AI announced it will be sharing its Open Pretrained Transformer (OPT-175B) language model. As part of its commitment to open science and transparency, the release includes both the pretrained models and the code needed to train and use them.
We’re excited to be the first MLOps observability platform to roll out custom Role-Based Access Control. This unique capability will help enterprises, especially those subject to regulation mandating algorithmic transparency and auditing, strengthen data privacy and reduce compliance risks.
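For readers unfamiliar with the concept, role-based access control boils down to mapping roles to sets of permissions and checking requests against that mapping. The sketch below is a minimal, generic illustration of the idea — the role names, permissions, and function are hypothetical, not Arthur’s actual API:

```python
# Minimal illustrative sketch of role-based access control (RBAC).
# All names here are hypothetical examples, not Arthur's product API.

ROLE_PERMISSIONS = {
    "admin": {"view_models", "edit_models", "manage_users"},
    "analyst": {"view_models"},
}

def has_permission(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example checks:
print(has_permission("admin", "manage_users"))   # an admin can manage users
print(has_permission("analyst", "edit_models"))  # an analyst cannot edit models
```

Keeping permissions in data rather than scattered through code is what makes custom roles practical: an enterprise can define its own role-to-permission mapping without changing application logic.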
Last month, three members of the Arthur team spoke at the ODSC East conference in Boston about drift detection, counterfactual explanations, and more. We were also proud to share the stage with one of our long-time partners, Humana, for a session on operationalizing fair ML.
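As a flavor of what drift detection involves (not the specific method presented at ODSC), one widely used metric is the Population Stability Index (PSI), which compares a reference feature distribution against live data. A minimal sketch:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    Each argument is a list of bin proportions summing to ~1; higher
    PSI means the live distribution has drifted further from the
    reference. A common rule of thumb treats PSI > 0.25 as major drift.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # clamp to avoid log(0) on empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

# Identical distributions yield PSI of 0; a shifted one yields a larger value.
print(psi([0.5, 0.5], [0.5, 0.5]))  # no drift
print(psi([0.5, 0.5], [0.8, 0.2]))  # noticeable drift
```

In practice an observability platform computes metrics like this continuously per feature and alerts when a threshold is crossed.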
Webinar: Translating ML to Business Value in Financial Services
In our upcoming financial services webinar on June 9, join Arthur’s VP of ML as he shares lessons learned over the years bringing data science projects into production and delivering real business value. You’ll come away with best practices and actionable insights for deploying AI and ML in your financial services organization.