From Brussels to Sacramento, the legislative response to the age of artificial intelligence (AI) by governments around the world is gradually picking up pace. For AI software development service providers, there are a number of critically important elements to these laws that need to be unpacked. In addressing them, this article focuses on the two pillars of AI regulation around the world: the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (“AI Act”), and the smorgasbord of AI bills still cooling in California Governor Gavin Newsom’s printer.
EU AI Regulations
The Breadth of the EU AI Act
One of the central tenets of the European Commission’s long-awaited AI Act is that it applies broadly to any business operating in the EU that offers AI services, products, or systems. That is, the law does not apply only to the ‘famous’ AI companies like OpenAI, Microsoft, or Google; it may be enforced against any company operating in the EU that sells any form of AI-based service.
For AI software development service providers, this is a critically important aspect of the law to understand, particularly since it is now relatively standard for clients to request the integration of AI into their systems. For both the service provider and the client, the AI Act establishes several obligations for providers and deployers in non-EU states where the output produced by an AI system is used in the EU. Such a broad scope therefore places both EU-based clients and their AI software developers, irrespective of location, squarely within the jurisdiction of the AI Act, and all parties need to be aware of their obligations in this regard.
Definition of AI Systems Under the AI Act
Article 3(1) of the AI Act broadly defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. Such a definition is clearly broad enough to cover any number of projects undertaken by AI software developers across the world, where the output of the project is used in the EU (Article 2(1)(c), AI Act).
Risk-Based Approach to AI Regulations
Equally, what makes the AI Act surprisingly functional, some could argue, is that it accepts the complexity of AI and assigns AI systems to risk categories that carry proportionate obligations. This is to say that the Commission’s legislation does not apply blanket regulations to all AI systems. Broadly, systems posing unacceptable risk are prohibited outright (Article 5), high-risk systems carry the heaviest compliance obligations, limited-risk systems attract transparency duties, and minimal-risk systems are largely left alone. Notably, the AI Act does not apply to AI systems released under free and open-source licences unless they are placed on the market as high-risk systems or fall within the prohibited practices of Article 5 or the transparency obligations of Article 50.
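To make that tiered structure concrete, the sketch below models the commonly cited four risk tiers as a simple Python mapping. The tier names and obligation summaries are paraphrased for illustration only; they are not quotations from the Act, and the binding text should always be consulted directly.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the EU AI Act (paraphrased)."""
    UNACCEPTABLE = "unacceptable"  # Art. 5 prohibited practices
    HIGH = "high"                  # high-risk systems (Chapter III)
    LIMITED = "limited"            # Art. 50 transparency obligations
    MINIMAL = "minimal"            # largely outside the Act's obligations

# Paraphrased obligation summaries, for orientation only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Transparency duties, e.g. disclosing AI-generated content.",
    RiskTier.MINIMAL: "No specific obligations; voluntary codes of conduct.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the paraphrased obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```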
Ensuring Compliance and Understanding Client Obligations
Ultimately, it is imperative that AI software development service providers have a strong understanding of who their EU-based clients are and where their deployed AI systems operate, since the AI Act imposes a number of obligations on anyone engaged in the development of such systems, including outside Europe if those systems are used in the EU. To be clear, Article 2 of the AI Act states that the Regulation applies to “providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country”. This makes an effective discovery period paramount to delivering a compliant product, no matter where clients and their AI software providers are located, should the output of the product be used in the EU.
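As a rough illustration of that territorial scope (and emphatically not legal advice), the following sketch compresses the Article 2 trigger into a simplified heuristic: the Act can apply whenever a system is placed on the EU market, the provider sits in the EU, or the system’s output is used in the EU.

```python
def ai_act_may_apply(provider_in_eu: bool,
                     placed_on_eu_market: bool,
                     output_used_in_eu: bool) -> bool:
    """Simplified heuristic for the AI Act's territorial scope (Art. 2).

    This reduces a nuanced legal test to three booleans purely for
    illustration; real scoping questions belong with legal counsel.
    """
    return provider_in_eu or placed_on_eu_market or output_used_in_eu

# A US-based developer whose client uses the system's output in the EU:
assert ai_act_may_apply(provider_in_eu=False,
                        placed_on_eu_market=False,
                        output_used_in_eu=True)
```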
Whilst these laws are sweeping in their application, they are equally clear. It is up to AI software developers to work closely with their clients to understand the scope of an AI-based project. Where such a project is linked to the EU as described above, clients and their AI software developers should consider where the AI product potentially falls within the AI Act’s risk-based categories.
AI Regulations in the US
California’s Balancing Act
Whilst the AI Act and California’s suite of regulations broadly share the same aim: to promote responsible AI development, there are a number of differences in their approaches and implications. Widely regarded as the ‘tech state’, California has produced a patchwork of regulations largely aimed at encouraging AI development whilst simultaneously safeguarding public interests. More specifically, California’s AI regulations focus primarily on enhancing transparency and consumer protection for AI-driven products.
However, the implications of these laws for AI software development service providers are just as significant as those of the AI Act’s sweeping provisions. This is because, for most AI software development companies, clients are requesting the integration of generative AI systems: systems that can learn from data and “generate” new content.
Key Legislative Requirements
One of the many bills signed in September 2024, AB 2013 provides a general definition of AI similar to those adopted in other jurisdictions, such as the EU’s AI Act and Colorado’s AI law. The legislation states that AI is “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” Such a sweeping definition ensures that, much like under the EU’s AI Act, most AI-based projects made publicly available to Californians will be subject to the reporting requirements set out in the bill.
These standards require AI developers (being “a person, partnership, state or local government agency, or corporation that designs, codes, produces, or substantially modifies an artificial intelligence system or service for use by members of the public”) to include documentation outlining the data used to train the system or service.
Preparing for Future Compliance
It is important that clients understand these requirements, and equally, that AI software development providers are aware of the dependencies used in delivering generative AI services and factor these regulations into the delivery of AI projects for clients (a minimal sketch of how such documentation might be captured follows the list below). Specifically, the law states that from January 1, 2026, generative AI systems will require documentation regarding:
- Data point information and the purpose/methodology of any data collection;
- Whether the developer purchased or licensed the datasets used by the system;
- Whether the datasets contain personal information; and
- The use of synthetic information for ongoing development and training.
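One lightweight way for development teams to operationalize these fields is to capture them in a structured record at the start of a project. The dataclass below is a minimal sketch; the field names are our own shorthand for the statutory categories, not terms defined in AB 2013, and the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataDisclosure:
    """Illustrative record of the AB 2013 documentation fields.

    Field names are informal shorthand, not statutory terms.
    """
    dataset_name: str
    data_points_summary: str      # nature/volume of data points collected
    collection_purpose: str       # purpose and methodology of collection
    purchased_or_licensed: bool   # whether datasets were bought or licensed
    contains_personal_info: bool  # whether personal information is included
    synthetic_data_used: bool     # use of synthetic data in training
    notes: list[str] = field(default_factory=list)

# Hypothetical example for a client project:
disclosure = TrainingDataDisclosure(
    dataset_name="client_support_tickets_v2",
    data_points_summary="~1.2M text records collected 2021-2024",
    collection_purpose="Fine-tuning a customer-support chatbot",
    purchased_or_licensed=True,
    contains_personal_info=True,
    synthetic_data_used=False,
)
```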
Sector-Specific Regulations in California
Backed by these reporting obligations, California went further and crafted industry-specific laws, from healthcare (AB 3030, which mandates that health providers using generative AI for patient communications include disclaimers and instructions for contacting a human provider) to protections against deceptive AI-generated voices (AB 2905, which requires prerecorded messages that use an artificially generated voice to disclose this during the initial natural-voice announcement). Further, Governor Newsom’s much-discussed veto of SB 1047 only underlines the Golden State’s effort to pass only the most specific and effective legislation for such a nascent industry: he reasoned that the bill’s focus on regulating only large-scale AI models would provide a false sense of security, since it could neglect smaller, specialized AI models that may pose even greater risks for data privacy.
Therefore, whilst its approach may seem piecemeal, the tech state is providing a roadmap for the rest of the world in considering the plethora of impacts that AI is having on everyday life and how best to protect consumers. For AI software development service providers, this translates into a recommendation to consider such impacts for clients requesting the development of generative AI systems, at the very least, and to maintain a data provenance record that traces the lineage of the data used to train AI systems.
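One simple way to maintain such a provenance record is an append-only lineage log, where every ingestion or transformation of a training dataset is recorded together with its source and license. The sketch below assumes nothing about any particular tool or statute; names like `LineageEvent` and the example values are purely illustrative.

```python
import datetime
import json
from dataclasses import dataclass, asdict

@dataclass
class LineageEvent:
    """One step in a dataset's lineage: where data came from, what was done."""
    dataset: str
    source: str     # upstream dataset, vendor, or collection method
    license: str    # license or contract governing the data
    operation: str  # e.g. "ingest", "deduplicate", "anonymize"
    timestamp: str

def record_event(log_path: str, event: LineageEvent) -> None:
    """Append a lineage event to a JSON-lines provenance log."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(event)) + "\n")

# Hypothetical usage:
record_event("provenance.jsonl", LineageEvent(
    dataset="client_support_tickets_v2",
    source="CRM export, vendor-licensed",
    license="Vendor agreement #1234 (illustrative)",
    operation="anonymize",
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
))
```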
Conclusion
The avalanche of new regulation in Europe and California represents a significant milestone in the journey toward responsible AI development. As illustrated in this article, these laws are comprehensive, yet flexible. Indeed, whilst they pose challenges for AI software development service providers, they also offer clients opportunities for innovation, ethical practice, and competitive advantage. Companies that prioritize compliance with these new regulations will be best placed to thrive as the global AI landscape continues to evolve.
In understanding and staying ahead of these regulatory changes, AI software developers such as the team at Scopic Inc will continue to contribute to a responsible and trustworthy AI ecosystem – one that not only meets regulatory requirements but also serves the broader interests of the client.
About The AI Laws and Regulations Guide
This guide was authored by Joseph Chigwidden, In-House Legal Consultant.
Scopic provides quality and informative content, powered by our deep-rooted expertise in software development. Our team of content writers and experts have deep knowledge of the latest software technologies, allowing them to break down even the most complex topics in the field. They also know how to tackle topics from a wide range of industries, capture their essence, and deliver valuable content across all digital platforms.