With the first draft of the Code of Practice for general-purpose AI in hand, we are starting to see differences emerge among the stakeholders.
Industry, civil society, and academia are voicing their views in working group meetings this week, while preparing written feedback due on 28 November.
The working group meetings feature input from a pre-selected set of speakers, with the chairs responding to the most up-voted questions submitted through the Slido platform, in what was billed as an "interactive session".
Some may have missed the opportunity to voice their opinions, given the short deadline set last week: at Monday's working group meeting, the most up-voted question received just five votes, one person involved told Euractiv.
From speaking with stakeholders about the first draft and hearing what was discussed in the meeting, here are some of the axes of disagreement:

What's in the AI Act and what's not. Industry has said several times that it fears the CoP will "go further" than the AI Act. The current draft of the code is careful to repeatedly reference the relevant parts of the AI Act, especially in contentious areas. One of these areas is third-party audits, which industry stakeholders say are outside the scope of the Act but are included in the CoP. The CoP refers to a recital that reads "this Regulation should require providers to perform the necessary model evaluations [...] as appropriate, through internal or independent external testing." The CoP does not yet set specific requirements for when providers must undergo third-party testing, so some civil society organisations see the question as still open and hope such testing will be mandated in the code. One of the "open questions" in the draft asks under what circumstances pre-deployment third-party testing is appropriate.

Exemption of SMEs and startups. Some industry representatives argued that the same rules should apply to everyone, with no exemptions for SMEs and startups. A civil society representative told Euractiv that this could be a tactic to water down the rules: if even the smallest companies have to comply with them, the requirements will also have to be mild.

Copyright issues. Copyright may pit industry, which wants to keep data easy to use, directly against rightsholders, who want to protect their intellectual property and be compensated. It will be discussed at a working group meeting on Thursday.

Open source exemptions. Open source models are exempt from some requirements, but this might create a consistency challenge for models posing systemic risk: can the CoP justify more lenient standards for models that pose the same kinds of risk but are less controllable?
The first draft is still (perhaps by design) lacking in detail, especially for working groups 2-4, so the tension between different interest groups should increase as the code is narrowed down.