
Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.
Episodes

Tuesday Mar 04, 2025
AI explained: The EU AI Act, the Colorado AI Act and the EDPB
Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI?
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, and the EDPB guidance, and we'll explain soon what all those initials mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy.
Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And looking forward to discussing with you interesting data protection topics.
Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast.
Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office.
Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act?
Andy: Sure, yeah. It came into force in August 2024, and it is a law mainly about the responsible use of AI. Generally, it is not really focused on data protection matters; rather, it sits next to the world-famous European General Data Protection Regulation. It has a couple of passages where it refers to the GDPR and also sometimes states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept, dividing AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and the Commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some transparency requirements when organizations use AI towards end customers. Depending on these risk categories, there are certain requirements, and attached to each of these categories, developers, importers, and also users, meaning organizations deploying AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. This is the basic concept, and the rules start to kick in on February 2nd, 2025, when prohibited AI must not be used anymore in Europe. The next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should start now and be prepared to comply with these rules, and get familiar with this new type of law. It's kind of like a new area of law.
Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act?
Tyler: Sure, happy to. So the Colorado AI Act, this is really the first comprehensive AI law in the United States, passed at the end of the 2024 legislative session. It covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial or lending service decisions, essential government services, healthcare services, housing, insurance, legal services. So that consequential decision piece is fairly broad. The effective date is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act: it's tied into the Colorado deceptive trade practices regime.
Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance?
Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It was actually released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB actually follows a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year. And I also wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately, meaning they do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set, and secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. Consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the recent EDPB statement does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement responds to a request from the Irish Data Protection Commission and gives kind of a framework, particularly with respect to certain aspects. It actually responds to four specific questions. The first question was: under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. It must be impossible, with all likely means, to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis for the use and the training of AI models, and the EDPB answered those questions in one answer. The statement indicates that the development and use of AI models can generally be based on the legal basis of legitimate interest; the statement then lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. Otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement.
Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts?
Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. Instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase "case-by-case" appears in the statement: it appears 16 times, and "can" or "could" appears 161 times. So obviously, this is likely to lead to different approaches among data protection authorities, but maybe it's also just an intended strategy of the EDPB. Who knows?
Catherine: Well, as an American, I would read that as giving me a lot of flexibility.
Thomas: Yeah, true.
Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems?
Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. And if you now look at the AI Act plus this EDPB paper, or generally the GDPR, there are a couple of items where organizations can prepare and need to prepare. First of all, organizations using generative AI must be aware that a lot of the obligations are on the developers. So the developers of generative AI definitely have more obligations, especially under the AI Act. For example, they have to create and maintain the model's technical documentation, including the training and testing processes, and monitor the AI system. They must also, and this can be really painful and will be painful, make available a detailed summary of the content that was used for training the model. And this goes very much into copyright topics as well. So there are a lot of obligations, and none of these are on the using side. So if organizations use generative AI, they don't have to comply with all of this, but they have to, and that's our recommendation, ensure in their agreements when they license the model or the AI system that they get confirmation from the developer that the developer complies with all of these obligations. That's kind of like the supply chain compliance in AI. So that's one of the aspects from the using side: make sure in your agreement that the provider complies with the AI Act. Another item for the agreement when licensing generative AI systems, attaching to what Thomas said, is getting a statement from the developer on whether or not the model itself contains personal data. The ideal answer is no, the model does not contain personal data, because then we don't have the poisonous tree. If the developer was not in compliance with the GDPR or data protection laws when doing the training, there is a cut: if the model does not contain any personal data, then this cannot infect the later use by the using organization. So this is a very important statement. We have not seen this in practice very often so far, and it is quite a strong commitment developers are asked to give, but it is something at least to be discussed in the negotiations. So that's the second point. A third point for the agreement with the provider is whether or not the usage data is used for further training; that can create data protection issues and might require using organizations to solicit consent or other justifications from their employees or users. And then, of course, having in place a data processing agreement with the provider or developer of the generative AI system if it runs on someone else's systems. So these are all items for the contracts, and we think this is something that needs to be tackled now, because it always takes a while until the contract is negotiated and in place. And on top of this, as I said, the AI Act obligations for using organizations are rather limited. There are only some transparency obligations, for example to inform their employees that they're using AI, or to inform end users that a certain text or photo or article was created by AI. So like a tag saying this was created by AI, being transparent that AI was used to develop something. And then on top of this, the general GDPR compliance requirements apply, like transparency about what personal data is processed when the AI is used, justification of the processing, adding the AI system to your records of processing activities, and also checking whether a data protection impact assessment is required. This will mainly be the case if the AI has an intensive impact on the personality rights of data subjects. So these are the general requirements. So takeaways: check the contracts, check the limited transparency requirements under the AI Act, and comply with what you know already under the GDPR.
Tyler: It's interesting because there is a lot of overlap between the EU AI Act and the Colorado AI Act. But Colorado does have those robust impact assessment requirements. You know, you've got to provide notification, you have to provide opt-out rights and appeal, and you do have some of that publicly facing notice requirement as well. The one thing that I want to highlight that's a little bit different is that we have an AG notification requirement. So if you discover that your artificial intelligence system has been creating an effect that could be considered algorithmic discrimination, you have an affirmative duty to notify the attorney general. So that's something that's a little bit different. But I think overall, there's a lot of overlap between the Colorado AI Act and the EU AI Act. And I like Andy's analogy of the supply chain, right? Colorado works that way as well. Yes, it applies to the developers, but it also applies to deployers. And on the deployer side, it is kind of that supply chain type of analogy: these are things that you as a deployer need to go back and look at with your developer, making sure you have the right documentation, that you've checked the right boxes there and have done the right things.
Catherine: Thanks for that, Tyler. Do you think we're entering into an area where the U.S. States might produce more AI legislation?
Tyler: I think so. Virginia has proposed a version of basically the Colorado AI Act. And I honestly think we could see the same thing with these AI bills that we have seen with privacy on the US side, which is kind of a state-specific approach: some states adopting the same or highly similar versions of the laws of other states, but then maybe a couple of states going off on their own and doing something unique. So it would not be surprising to me at all if, at least in the short to mid-term, we have a patchwork of AI laws throughout the United States just based on individual state law.
Catherine: Thanks for that. And I'm going to ask a question to Tyler, Tom and Andy, and whoever thinks of it first can answer. We've been reading a lot lately about DeepSeek and all the cyber insecurities, essentially, with utilizing a system like that, and some failures on the part of the developers there. Is there any security requirement, in either the EU or the Colorado AI act, for deploying or developing a new system?
Tyler: Yeah, for sure. So where your security requirements are going to come in, I think, is in the impact assessment piece, right? When you have to look at your risks and how this could affect an individual, whether through a discrimination issue or another type of risk, you're going to have to address that there. So while there's no specific security provision, there's no way that you're going to get around some of these security requirements, because you have to do that very robust impact assessment, right? Part of that analysis under the impact assessment is known or reasonably foreseeable risks. So things like that, you're going to have to, I would say, address via some of the security requirements facing the AI platform.
Catherine: Great. And what about from the European side?
Andy: Yes, it's similar from the European side, or perhaps even a bit more: robustness, cybersecurity and IT security are a major portion of the AI Act. So that's definitely a very, very important obligation and duty that must be complied with.
Catherine: And I would think too under GDPR, because you have to ensure adequate technical and organizational measures that if you had personal information going into the AI system, you'd have to comply with that requirement as well, since they stand side by side.
Andy: Exactly, exactly. And then there's under both also notification obligations if something goes wrong.
Catherine: Well, good to know. All right, well, maybe we'll do a future podcast on the impact of the NIST AI risk management framework and the application to both of these large bodies of law. But I thank all my colleagues for joining us today. We have time for just a quick final thought. Does anyone have one?
Andy: A thought from me: after the AI Act came into force, as a practical European I'm now worried that we're killing the AI industry and innovation in Europe. It's kind of good to see that at least some states in the U.S. follow a bit of a similar approach, even if it's, you know, different. And I haven't given up the hope for a more global solution. Perhaps the AI Act will also be adjusted a bit so that we come closer to a global solution.
Tyler: On the U.S. side, I'd say, look, my takeaway is start now; start thinking about some of this stuff now. It can be tempting to say it's just Colorado, and we have until February of 2026. But I think a lot of the things that the Colorado AI Act and even the EU AI Act are requiring are arguably things that you should be doing anyway. So I would say start now, especially, as Andy said, on the contract side, if nothing else. If you're starting to think about doing a deal with a developer or a deployer: What needs to be in that agreement? How do we need to protect ourselves? And how do we need to look at the regulatory space to future-proof this, so that when we come to 2026, we're not amending 30 or 40 contracts?
Thomas: And maybe a final thought from my side. The EDPB statement actually only answers a few questions. It doesn't touch other very important issues like automated decision-making; there is nothing in the document about that. There is not really anything about the use of sensitive data, and data protection impact assessments are not addressed. So a lot of topics remain unclear; at least there is no guidance yet.
Catherine: Those are great views and I'm sure really helpful to all of our listeners who have to think of these problems from both sides of the pond. And thank you for joining us again on Tech Law Talks. We look forward to speaking with you again soon.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
Transcript is auto-generated.