本期内容为欧盟《人工智能法案》中英对照版,法案英文全文9万余字、中文译文约6万字,囿于公号篇幅限制,分四期发出。本期为第72-113条。
CHAPTER IX POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE |
第九章 上市后监测、信息共享和市场监管 |
SECTION 1 Post-market monitoring |
第一节 上市后监测 |
Article 72 Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems 1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. 2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. 3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). 4. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, where a post-market monitoring system and plan are already established under that legislation, in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary elements described in paragraphs 1, 2 and 3 using the template referred to in paragraph 3 into systems and plans already existing under that legislation, provided that it achieves an equivalent level of protection. The first subparagraph of this paragraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed on the market or put into service by financial institutions that are subject to requirements under Union financial services law regarding their internal governance, arrangements or processes. |
第七十二条 高风险人工智能系统提供者的上市后监测和上市后监测计划 1、 提供者应当采用与人工智能技术的性质和高风险人工智能系统的风险相匹配的方式,建立并记录一套上市后监测系统。 2、 上市后监测系统应当主动、系统地收集、记录和分析部署者提供或通过其他来源收集的、与高风险人工智能系统在其整个生命周期内的性能有关的数据,使提供者能够评估人工智能系统是否持续符合本法第三章第二节规定。在相关的情况下,上市后监测应当包括对(系统)与其他人工智能系统交互的分析。上述义务不包括执法机构部署者的敏感操作数据。 3、 上市后监测系统应当以上市后监测计划为基础。上市后监测计划应当作为本法附录四所述技术文档的一部分。欧盟委员会应当在2026年2月2日之前制定一份实施细则作出详细规定,制定上市后监测计划模板和计划中应包含的要素清单。该等实施细则应当按照本法第九十八条第2款规定经审查程序审查通过。 4、 对于本法附录一第A条项下欧盟统一立法所涵盖的高风险人工智能系统,如果已经根据该立法建立上市后监测系统和计划,为确保一致性、避免重复和尽量减少额外负担,在确保能够达到同等保护水平的前提下,提供者可以选择使用本条第3款所述模板,视情况将本条第1款、第2款和第3款所规定的必要要素整合到其根据该法建立的现有系统和计划中。 本款第一项也适用于本法附录三第5条项下、由受欧盟金融服务法有关其内部治理、安排或流程要求约束的金融机构投放到市场或投入使用的高风险人工智能系统。 |
SECTION 2 Sharing of information on serious incidents |
第二节 重大事件信息共享 |
Article 73 Reporting of serious incidents 1. Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. 2. The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident. The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious incident. 3. Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident. 4. Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident. 5. Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial report that is incomplete, followed by a complete report. 6. Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. 7. Upon receiving a notification related to a serious incident referred to in Article 3, point (49)(c), the relevant market surveillance authority shall inform the national public authorities or bodies referred to in Article 77(1). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1 of this Article. That guidance shall be issued by 2 August 2025, and shall be assessed regularly. 8. The market surveillance authority shall take appropriate measures, as provided for in Article 19 of Regulation (EU) 2019/1020, within seven days from the date it received the notification referred to in paragraph 1 of this Article, and shall follow the notification procedures as provided in that Regulation. 9. For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c). 10. 
For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the Member States where the incident occurred. 11. National competent authorities shall immediately notify the Commission of any serious incident, whether or not they have taken action on it, in accordance with Article 20 of Regulation (EU) 2019/1020. |
第七十三条 重大事件报告 1、 投放到欧盟市场的高风险人工智能系统的提供者应当向事件发生地成员国的市场监管机构报告所有的重大事件。 2、 本条第1款所规定的报告应当在提供者确定人工智能系统与重大事件之间的因果关系或该等关系的合理可能性后立即提交,并不得晚于提供者或部署者(如适用)知道重大事件后15天。 本款第一项所述报告期限应当考虑重大事件的严重程度。 3、 尽管有本条第2款规定,但是,如果发生大规模侵权或本法第三条第(49)款(b)项定义的重大事件,应当立即提交本条第1款所述的报告,且不得晚于提供者或部署者(如适用)知道该事件之日起两天。 4、 尽管有本条第2款规定,但是,如果发生人员死亡,应当在提供者或部署者确定或怀疑高风险人工智能系统与重大事件之间存在因果关系后立即报告,且不得晚于提供者或部署者(如适用)知道重大事件之日起十天。 5、为确保报告的及时性,提供者或部署者(如适用)可以在必要时在提交完整报告之前先提交一份不完整的初步报告。 6、在根据本条第1款规定报告重大事件后,提供者应当尽快对重大事件和相关人工智能系统进行必要的调查。其中应当包括对事件的风险评估和纠正措施。 在本款第一项所述调查期间,提供者应当与主管机关和相关评定机构合作,在通知主管机关之前,不得以可能影响后续的事件原因评估的方式进行任何改变相应人工智能系统的调查。 7、在收到与本法第三条第(49)款(c)项所定义重大事件有关的通知后,相关市场监管机构应当通知本法第七十七条第1款项下国家公权力机关。欧盟委员会应当制定专门的指导方针,推动各方遵守本条第1款规定的义务。上述指导方针应当于2025年8月2日前发布,并应定期进行评估。 8、 市场监管机构应当在收到本条第1款所规定通知之日起七天内,根据第2019/1020号条例第19条规定采取适当措施,并应当遵守该条例规定的通知程序。 9、 对于本法附录三所述受欧盟法约束的提供者投放到市场或投入使用的高风险人工智能系统,如果该等欧盟法规定了与本法项下报告义务相当的报告义务,重大事件的通知应当仅适用于本法第三条第(49)款(c)项所述情况。 10、对于作为设备之安全组件或本身属于第2017/745号和第2017/746号条例项下设备的高风险人工智能系统,应当通知的重大事件仅限于本法第三条第(49)款(c)项所述事件,并应当向事件发生地成员国为该目的指定的国家主管机关发出。 11、根据第2019/1020号条例第20条规定,无论其是否已经采取相应应对措施,国家主管机关都应当立即向欧盟委员会报告所有的重大事件。 |
SECTION 3 Enforcement |
第三节 执行 |
Article 74 Market surveillance and control of AI systems in the Union market 1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective enforcement of this Regulation: (a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation; (b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation. 2. As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market surveillance authorities shall report annually to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred during that year and about the measures taken. 3. For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Section A of Annex I, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts. By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate another relevant authority to act as a market surveillance authority, provided they ensure coordination with the relevant sectoral market surveillance authorities responsible for the enforcement of the Union harmonisation legislation listed in Annex I. 4. The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to products covered by the Union harmonisation legislation listed in section A of Annex I, where such legal acts already provide for procedures ensuring an equivalent level of protection and having the same objective. In such cases, the relevant sectoral procedures shall apply instead. 5. Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU) 2019/1020, for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate. 6. For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by Union financial services law, the market surveillance authority for the purposes of this Regulation shall be the relevant national authority responsible for the financial supervision of those institutions under that legislation in so far as the placing on the market, putting into service, or the use of the AI system is in direct connection with the provision of those financial services. 7. By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is ensured, another relevant authority may be identified by the Member State as market surveillance authority for the purposes of this Regulation. 
National market surveillance authorities supervising regulated credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Regulation (EU) No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the prudential supervisory tasks of the European Central Bank specified in that Regulation. 8. For high-risk AI systems listed in point 1 of Annex III to this Regulation, in so far as the systems are used for law enforcement purposes, border management and justice and democracy, and for high-risk AI systems listed in points 6, 7 and 8 of Annex III to this Regulation, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities, or otherwise interfere with their activities when acting in their judicial capacity. 9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the European Union acting in its judicial capacity. 10. Member States shall facilitate coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex I, or in other Union law, that might be relevant for the high-risk AI systems referred to in Annex III. 11. Market surveillance authorities and the Commission shall be able to propose joint activities, including joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness or providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office shall provide coordination support for joint investigations. 12. Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access.
13. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: (a) access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and (b) testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient. 14. Any information or documentation obtained by market surveillance authorities shall be treated in accordance with the confidentiality obligations set out in Article 78. |
第七十四条 欧盟市场中的人工智能系统的市场监管和控制 1、第2019/1020号条例适用于本法项下的人工智能系统。为有效执行本法: (a)第2019/1020号条例提及的商业运营者都应当被理解为包括本法第二条第1款所规定的所有运营者; (b)第2019/1020号条例项下所有产品都应被理解为包括本法项下的所有人工智能系统。 2、作为第2019/1020号条例第34条第(4)款项下报告义务的一部分,市场监管机构应当每年向欧盟委员会和相关国家竞争主管机关报告其在市场监管过程中发现的、可能对欧盟竞争规则相关法律的适用具有潜在意义的任何信息。还应当每年向欧盟委员会报告当年发生的禁止性行为的情况及其所采取的应对措施。 3、对于与本法附录一第A条所列欧盟统一立法涵盖的产品有关的高风险人工智能系统,本法项下的市场监管机构应当为根据该等法律指定的负责市场监管活动的机构。 作为本款第一项规定的例外,成员国可以在适当情况下指定另一个相关机构作为市场监管机构,但应当确保与负责执行附录一所列欧盟统一立法的相关行业市场监管机构相协调。 4、如果本法附录一第A条所列欧盟统一立法已经规定了能够确保同等保护水平和相同目标的程序,那么,本法第七十九条至第八十三条所规定程序不适用于与附录一第A条所列欧盟统一立法所涵盖产品有关的人工智能系统。此种情况下,应当替代适用相关的行业程序。 5、在不减损第2019/1020号条例第14条所规定的市场监管机构权力的情况下,为确保本法的有效执行,市场监管机构可以视情况远程行使上述条例第14条第(4)款(d)项和(j)项所规定的权力。 6、对于被投放到市场、投入使用或受欧盟金融服务法监管的金融机构使用的高风险人工智能系统,在本法项下,只要人工智能系统被投放到市场、投入使用或使用与提供该等金融服务直接相关,市场监管机构即为根据该等金融服务法规定负责对上述机构进行金融监督的相关国家机关。 7、作为对本条第6款的例外,在确保协调的前提下,成员国可为实现本法之目的在适当情况下指定另一个相关机构作为市场监管机构。 监管第2013/36/EU号指令项下受监管信贷机构(该等机构参与根据第1024/2013号条例建立的单一监督机制)的国家市场监管机构,应当尽快向欧洲中央银行报告其在市场监管活动过程中发现的、可能对该条例所规定的欧洲中央银行审慎监督任务有潜在意义的所有信息。 8、对于被用于执法、边境管理、司法和民主目的的本法附录三第1条所列高风险人工智能系统,以及本法附录三第6条、第7条和第8条所列高风险人工智能系统,各成员国应当根据第2016/679号条例或第2016/680号指令规定指定主管数据保护的监管机构,或根据第2016/680号指令第41条至第44条规定的相同条件指定任何其他机构作为本法项下的市场监管机构。市场监管活动不得以任何方式影响司法机关的独立性,也不得以其他方式干涉其司法活动。 9、如果欧盟机构、机关、办事处或部门落入本法的调整范围,欧洲数据保护监督员应当担任其市场监管机构,但行使司法权力的欧盟法院除外。 10、各成员国应当促进根据本法规定指定的市场监管机构与附录一所列欧盟统一立法或其他欧盟法律项下可能与附录三所列高风险人工智能系统监管有关的其他相关国家机关之间的协调。 11、市场监管机构和欧盟委员会应当可以提议由市场监管机构单独或与欧盟委员会共同开展联合行动(包括联合调查),根据第2019/1020号条例第9条规定,就其发现的对两个或两个以上成员国构成重大风险的特定类型高风险人工智能系统,促进合规、识别不合规行为、提高认识或提供与本法有关的指导。人工智能办公室应当为上述联合调查提供协调支持。 12、在不影响第2019/1020号条例所规定权力的情况下,在相关且仅限于完成其职责所必需的范围内,提供者应当允许市场监管机构完整访问其用于开发高风险人工智能系统的文档及其训练、验证和测试数据集,包括在适当且有安全保障的情况下通过应用编程接口(API)或其他相关技术手段和工具实现远程访问。 13、市场监管机构仅在提出附有理由的请求且下列两个条件均满足的情况下,方可获得高风险人工智能系统源代码的访问权限: (a)访问源代码对于评估高风险人工智能系统是否符合本法第三章第二节规定的要求确有必要;且 (b)以提供者所提供数据和文档为基础的测试或审计程序和验证已经穷尽,或被证明不充足。 14、市场监管机构获得的任何信息或文档均应当按照本法第七十八条规定的保密义务采取保密措施。 |
Article 75 Mutual assistance, market surveillance and control of general-purpose AI systems 1. Where an AI system is based on a general-purpose AI model, and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with obligations under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market surveillance authority provided for in this Section and Regulation (EU) 2019/1020. 2. Where the relevant market surveillance authorities have sufficient reason to consider general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation to be non-compliant with the requirements laid down in this Regulation, they shall cooperate with the AI Office to carry out compliance evaluations, and shall inform the Board and other market surveillance authorities accordingly. 3. Where a market surveillance authority is unable to conclude its investigation of the high-risk AI system because of its inability to access certain information related to the general-purpose AI model despite having made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office, by which access to that information shall be enforced. In that case, the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is non-compliant. Market surveillance authorities shall safeguard the confidentiality of the information that they obtain in accordance with Article 78 of this Regulation. The procedure provided for in Chapter VI of Regulation (EU) 2019/1020 shall apply mutatis mutandis. |
第七十五条 通用人工智能系统的互助、市场监管和控制 1、 如果人工智能系统以通用人工智能模型为基础,且该模型和系统由同一提供者开发,人工智能办公室有权监测和监督该人工智能系统遵守本法所规定义务的情况。为履行其监测和监督职责,人工智能办公室应当拥有本节和第2019/1020号条例规定的市场监管机构的所有权力。 2、 如果相关市场监管机构有充分理由认为,可由部署者直接用于至少一项根据本法被归类为高风险之用途的通用人工智能系统不符合本法规定的要求,应当与人工智能办公室合作进行合规性评估,并相应通知人工智能委员会和其他市场监管机构。 3、 如果市场监管机构即使已尽一切适当努力仍无法获取与通用人工智能模型有关的特定信息,因而无法完成对高风险人工智能系统的调查,其可以向人工智能办公室提交一份附有理由的申请,由人工智能办公室强制获取该等信息。此种情况下,人工智能办公室应当尽快(最晚不超过30天内)向申请机构提供人工智能办公室认为有关的所有信息,以确定高风险人工智能系统是否违规。市场监管机构应当按照本法第七十八条规定对所获得的信息保密。第2019/1020号条例第六章规定的程序参照适用。 |
Article 76 Supervision of testing in real world conditions by market surveillance authorities 1. Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in accordance with this Regulation. 2. Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 58, the market surveillance authorities shall verify the compliance with Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article 60(4), points (f) and (g). 3. Where a market surveillance authority has been informed by the prospective provider, the provider or any third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it may take either of the following decisions on its territory, as appropriate: (a) to suspend or terminate the testing in real world conditions; (b) to require the provider or prospective provider and the deployer or prospective deployer to modify any aspect of the testing in real world conditions. 4. Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor and how the provider or prospective provider can challenge the decision or objection. 5. Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system has been tested in accordance with the testing plan. |
第七十六条 市场监管机构对真实场景条件下测试的监管 1、 市场监管机构应当有能力且有权确保在真实场景条件下进行的测试符合本法规定。 2、 如果在真实场景条件下对本法第五十八条项下在人工智能监管沙盒中受监督的人工智能系统进行测试,市场监管机构应当验证其是否符合本法第六十条规定,作为其对人工智能监管沙盒之监督职能的一部分。作为本法第六十条第4款(f)项和(g)项所规定条件的例外,该等机构可视情况允许提供者或潜在提供者在真实场景条件下进行测试。 3、 如果市场监管机构已经收到提供者、潜在提供者或任何第三方的重大事件通知,或者有其他理由认为不符合本法第六十条和第六十一条规定的条件,则可视情况在其境内作出以下任一决定: (a)暂停或终止在真实场景条件下的测试; (b)要求提供者或潜在提供者以及部署者或潜在部署者修改真实场景条件下测试的任何方面。 4、 如果市场监管机构已经作出本条第3款所述决定,或已经发出本法第六十条第4款(b)项所规定的异议,该等决定或异议应当说明理由,以及提供者或潜在提供者可以如何对该决定或异议提出质疑。 5、 市场监管机构已经作出本条第3款项下决定的(如适用),应当将其作出决定的理由告知已根据测试计划对相应人工智能系统进行测试的其他成员国的市场监管机构。 |
Article 77 Powers of authorities protecting fundamental rights 1. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation in accessible language and format when access to that documentation is necessary for effectively fulfilling their mandates within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request. 2. By 2 November 2024, each Member State shall identify the public authorities or bodies referred to in paragraph 1 and make a list of them publicly available. Member States shall notify the list to the Commission and to the other Member States, and shall keep the list up to date. 3. Where the documentation referred to in paragraph 1 is insufficient to ascertain whether an infringement of obligations under Union law protecting fundamental rights has occurred, the public authority or body referred to in paragraph 1 may make a reasoned request to the market surveillance authority, to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request. 4. Any information or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of this Article pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78. |
第七十七条 基本权利保护机构的权力 1、 负责监督或执行保护基本权利(包括免受歧视的权利)的欧盟法所规定义务的国家公权力机关,就本法附录三所述高风险人工智能系统的使用,在其管辖权限内为有效履行其职责确有必要的前提下,有权要求并获取根据本法以无障碍语言和格式创建或维护的所有文档。相关公权力机关应当将上述要求通知相关成员国的市场监管机构。 2、 每个成员国都应当在2024年11月2日之前明确本条第1款所述公权力机关,并公布其名单。各成员国应当将其指定的公权力机关名单通知欧盟委员会和其他成员国,并应确保及时更新名单。 3、 如果根据本条第1款所述文档不足以确定是否发生违反基本权利保护相关欧盟法所规定义务的行为,第1款中所述公权力机关可以向市场监管机构提出附有理由的申请,安排通过技术手段对高风险人工智能系统进行测试。市场监管机构应当在提出申请的公权力机关的密切参与下,在提出申请后的合理期间内组织测试。 4、 本条第1款项下国家公权力机关根据本条规定获取的所有信息和文档,都应当根据本法第七十八条规定的保密义务作保密处理。 |
Article 78 Confidentiality 1. The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular: (a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council (57); (b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits; (c) public and national security interests; (d) the conduct of criminal or administrative proceedings; (e) information classified pursuant to Union or national law. 2. The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in accordance with this Regulation and with Regulation (EU) 2019/1020. They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law. 3. Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities. When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof. 4. Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant authorities, as well as those of notified bodies, with regard to the exchange of information and the dissemination of warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned to provide information under criminal law of the Member States. 5. 
The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of international and trade agreements, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality. |
第七十八条 保密 1、欧盟委员会、市场监管机构和评定机构以及参与本法适用的任何其他自然人或法人,均应当根据欧盟或成员国法律规定,确保在执行任务和活动时获得的信息和数据的保密性,尤其应当保护: (a)自然人或法人的知识产权和保密商业信息或商业秘密(包括源代码),但欧洲议会和欧盟理事会第2016/943号指令第5条所述情况除外; (b)本法的有效实施,特别是为检查、调查或审计目的; (c)公共安全和国家安全利益; (d)进行刑事或行政诉讼; (e)根据欧盟或成员国法律被归类为应保密的信息。 2、根据本条第1款规定参与本法适用的公权力机关仅应当要求提供其评估人工智能系统带来的风险并根据本法和第2019/1020号条例行使权力所必需的数据。他们应当采取充分有效的网络安全措施,保护所获得信息和数据的安全性和保密性,并应当根据适用的欧盟或成员国法律规定,在其获取信息的目的不再有必要时立即删除所收集的数据。 3、在不影响本条第1款和第2款的情况下,当执法、边境管制、移民或庇护机关使用本法附录三第1条、第6条或第7条所述高风险人工智能系统,且披露信息会危及公共安全和国家安全利益时,未事先与信息来源的国家主管机关和部署者协商,不得披露国家主管机关之间或其与欧盟委员会之间在保密基础上交换的信息。该等信息交换不应含有与执法、边境管制、移民或庇护当局之活动有关的敏感业务数据。 当执法、移民或庇护机关提供本法附录三第1条、第6条或第7条所述高风险人工智能系统时,本法附录四所规定的技术文档应当保存在上述机关的办公场所。上述机关应当确保本法第七十四条第8款和第9款项下市场监管机构(如适用)可以按要求立即访问文档或获取文档副本。仅持有适当水平安全许可的市场监管机构的工作人员才能访问该等文档或其任何副本。 4、本条第1款、第2款和第3款不得影响欧盟委员会、各成员国及其有关主管机关和评定机构在交换信息和发布警告方面的权利或义务,包括在跨境合作时,也不得影响相关各方根据成员国刑法规定所承担提供信息的义务。 5、欧盟委员会和各成员国可以在必要时根据相关国际和贸易协议的约定,与已经与其缔结双边或多边保密协议并保证充分保密水平的第三国监管机构交换机密信息。 |
Article 79 Procedure at national level for dealing with AI systems presenting a risk 1. AI systems presenting a risk shall be understood as a ‘product presenting a risk’ as defined in Article 3, point 19 of Regulation (EU) 2019/1020, in so far as they present risks to the health or safety, or to fundamental rights, of persons. 2. Where the market surveillance authority of a Member State has sufficient reason to consider an AI system to present a risk as referred to in paragraph 1 of this Article, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. Particular attention shall be given to AI systems presenting a risk to vulnerable groups. Where risks to fundamental rights are identified, the market surveillance authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Article 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other national public authorities or bodies referred to in Article 77(1). Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation. The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph. 3. Where the market surveillance authority considers that the non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take. 4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. 5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system’s being made available on its national market or put into service, to withdraw the product or the standalone AI system from that market or to recall it. That authority shall without undue delay notify the Commission and the other Member States of those measures. 6. The notification referred to in paragraph 5 shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator.
In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following: (a) non-compliance with the prohibition of the AI practices referred to in Article 5; (b) a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2; (c) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity; (d) non-compliance with Article 50. 7. The market surveillance authorities other than the market surveillance authority of the Member State initiating the procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections. 8. Where, within three months of receipt of the notification referred to in paragraph 5 of this Article, no objection has been raised by either a market surveillance authority of a Member State or by the Commission in respect of a provisional measure taken by a market surveillance authority of another Member State, that measure shall be deemed justified. This shall be without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The three-month period referred to in this paragraph shall be reduced to 30 days in the event of non-compliance with the prohibition of the AI practices referred to in Article 5 of this Regulation. 9. The market surveillance authorities shall ensure that appropriate restrictive measures are taken in respect of the product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue delay. |
第七十九条 成员国层面处理存在风险的人工智能系统的程序 1、 对人的健康、安全或基本权利带来风险的存在风险的人工智能系统应当被理解为第2019/1020号条例第3条第19项定义的“存在风险的产品”。 2、 如果成员国的市场监管机构有充足的理由认为人工智能系统存在本条第1款所述风险,应当评估该人工智能系统是否符合本法规定的所有要求和义务,并应特别关注对弱势群体构成风险的人工智能系统。如果发现基本权利面临风险,市场监管机构还应当通知本法第七十七条第1款项下相关国家公权力机关,并与之充分合作。相关运营者应当在必要时与市场监管机构和本法第七十七条第1款项下其他国家公权力机关合作。 在评估过程中,如果市场监管机构自行或与本法第七十七条第1款项下国家公权力机关共同(如适用)发现人工智能系统不符合本法规定的要求和义务,应当尽快要求相关运营者在市场监管机构规定的期限内(且无论如何不得超过15个工作日或相关欧盟统一立法所规定期限中的较短者)采取一切适当的纠正措施,使人工智能系统符合要求,将其从市场上撤回,或予以召回。市场监管机构应当相应通知相关评定机构。 第2019/1020号条例第18条适用于本款第二项所述措施。 3、 如果市场监管机构认为违规行为不限于其本国领土范围内,应当尽快将评估结果和要求运营者采取的措施通知欧盟委员会和其他成员国。 4、 运营者应当确保对其在欧盟市场上供应的所有相关人工智能系统采取一切适当的纠正措施。 5、 如果人工智能系统的运营者未在本条第2款规定的期限内采取相应纠正措施,市场监管机构应当采取一切适当的临时措施,禁止或限制相关人工智能系统在其国内市场上上市或投入使用,从其国内市场上下架产品或独立人工智能系统,或者召回该产品或独立人工智能系统。市场监管机构应当尽快将其所采取的措施通知欧盟委员会和其他成员国。 6、 本条第5款项下的通知应当包括所有可用的详细信息,特别是识别违规人工智能系统所需的信息、人工智能系统和供应链的来源、所指控的违规行为的性质和所涉及的风险、所采取的措施的性质和持续时间以及相关运营者提出的意见。市场监管机构尤其应当说明违规行为是否因以下一种或多种原因所引起: (a)未遵守本法第五条关于禁止性人工智能活动的规定; (b)高风险人工智能系统未能满足本法第三章第二节规定的要求; (c)赋予符合性推定效力的本法第四十条和第四十一条所述统一标准或通用规范存在缺陷; (d)未遵守本法第五十条。 7、 除启动该程序的成员国市场监管机构外,其他市场监管机构应当尽快将其所采取的任何措施,以及其所掌握的与相关人工智能系统不合规有关的任何其他信息通知欧盟委员会和其他成员国;如果不同意所通知的成员国措施,还应当提出异议。 8、 如果成员国市场监管机构或欧盟委员会在收到本条第5款项下通知后的三个月内均未对其他成员国市场监管机构采取的临时措施提出异议,该措施应当被视为合理。根据第2019/1020号条例第18条,上述规定不应损害相关运营者的程序权利。如果涉及未遵守本法第五条项下人工智能活动禁令,本款所述的三个月期限应当缩短至30天。 9、 市场监管机构应当确保对相关产品或人工智能系统采取适当的限制措施,例如尽快将产品或人工智能系统撤出相应市场。 |
Article 80 Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III 1. Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk pursuant to Article 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Article 6(3) and the Commission guidelines. 2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe. 3. Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take. 4. The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. 5. The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. 6. Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in paragraph 2 of this Article, Article 79(5) to (9) shall apply. 7. Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99. 8. In exercising their power to monitor the application of this Article, and in accordance with Article 11 of Regulation (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into account in particular information stored in the EU database referred to in Article 71 of this Regulation. |
第八十条 适用附录三处理被提供者归类为非高风险的人工智能系统的程序 1、 如果市场监管机构有充足理由认为提供者根据本法第六条第3款归类为非高风险的人工智能系统确有高风险,市场监管机构应当根据本法第六条第3款和欧盟委员会指导方针所规定的条件,对相关人工智能系统被归类为高风险人工智能系统事宜进行评估。 2、 评估过程中,如果市场监管机构发现相关人工智能系统具有高风险,应当尽快要求相关提供者采取一切必要措施使人工智能系统符合本法规定的要求和义务,并在市场监管机构规定的期限内采取适当的纠正措施。 3、 如果市场监管机构认为相关人工智能系统的使用不限于其本国领土范围,应当尽快将评估结果和要求提供者采取的措施通知欧盟委员会和其他成员国。 4、 提供者应当确保采取一切必要措施,使人工智能系统符合本法规定的要求和义务。如果相关人工智能系统的提供者没有在本条第2款所规定的期限内使人工智能系统符合该等要求和义务,则应当根据本法第九十九条规定对提供者处以罚款。 5、 提供者应当确保对其在欧盟市场上供应的所有相关人工智能系统采取一切适当的纠正措施。 6、 如果相关人工智能系统的提供者未在本条第2款规定的期限内采取充足的纠正措施,应当适用本法第七十九条第5款至第9款规定。 7、 市场监管机构根据本条第1款进行评估的过程中,确定相关人工智能系统被提供者错误地归类为非高风险,以规避本法第三章第二节要求的,应当根据本法第九十九条规定对提供者处以罚款。 8、 市场监管机构行使权力监督本条的适用情况时,根据第2019/1020号条例第11条规定,可以进行适当的检查,尤其是检查本法第七十一条所述的欧盟数据库中存储的信息。 |
Article 81 Union safeguard procedure 1. Where, within three months of receipt of the notification referred to in Article 79(5), or within 30 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, starting from the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to the market surveillance authority of the Member State concerned. The Commission shall also inform all other market surveillance authorities of its decision. 2. Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure and shall inform the Commission accordingly. 3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012. |
第八十一条 欧盟保障程序 1、 如果成员国的市场监管机构在收到本法第七十九条第5款项下通知后三个月内(在未遵守本法第五条项下人工智能活动禁令的情况下,在30天内),对另一市场监管机构采取的措施提出异议,或者欧盟委员会认为该等措施违反欧盟法,欧盟委员会应当尽快与相关成员国市场监管机构和运营者协商,并对相关成员国措施进行评估。根据评估结果,欧盟委员会应当自本法第七十九条第5款所述通知之日起六个月内(如涉及未遵守本法第五条所述人工智能活动禁令,则为60天内)决定上述成员国措施是否合理,并将其决定通知相关成员国的市场监管机构。欧盟委员会还应当将其决定告知所有其他市场监管机构。 2、 如果欧盟委员会认为相关成员国采取的措施合理,所有成员国都应当确保对相应人工智能系统采取适当的限制措施,例如要求该等人工智能系统尽快退出其市场,并应当相应通知欧盟委员会。如果欧盟委员会认为相关成员国采取的措施不合理,相关成员国应当撤销该项措施,并相应通知欧盟委员会。 3、 如果认为相关成员国采取的措施合理,并且人工智能系统的违规是由于本法第四十条和第四十一条所述统一标准或通用规范的缺陷所造成,欧盟委员会应当适用第1025/2012号条例第11条规定的程序。 |
Article 82 Compliant AI systems which present a risk 1. Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe. 2. The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1. 3. The Member States shall immediately inform the Commission and the other Member States of a finding under paragraph 1. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken. 4. The Commission shall without undue delay enter into consultation with the Member States concerned and the relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures. 5. The Commission shall immediately communicate its decision to the Member States concerned and to the relevant operators. It shall also inform the other Member States. |
第八十二条 合规但存在风险的人工智能系统 1、 如果成员国市场监管机构根据本法第七十九条规定评估,并征求本法第七十七条第1款项下相关国家公权力机关的意见后,发现尽管高风险人工智能系统符合本法规定,但仍然对人的健康或安全、基本权利或公共利益保护的其他方面构成风险,应当要求相关运营者在其规定的期限内尽快采取一切适当措施,确保相应人工智能系统在被投放到市场或投入使用时不再构成风险。 2、 提供者或其他相关运营者应当确保在本条第1款所述成员国市场监管机构规定的时间内,对其在欧盟市场上供应的所有相关人工智能系统采取纠正措施。 3、 成员国应当立即将本条第1款规定的调查结果通知欧盟委员会和其他成员国。通知内容应当包括所有可用的细节,特别是识别相关人工智能系统所需的数据、人工智能系统的来源和供应链、所涉风险的性质以及所采取的成员国措施的性质和持续时间。 4、 欧盟委员会应当尽快与相关成员国和相关运营者协商,并评估所采取的成员国措施。根据评估结果决定该等措施是否合理,并在必要时提出其他适当措施。 5、 欧盟委员会应当立即将其决定通知相关成员国和相关运营者。也应当通知其他成员国。 |
Article 83 Formal non-compliance 1. Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe: (a) the CE marking has been affixed in violation of Article 48; (b) the CE marking has not been affixed; (c) the EU declaration of conformity referred to in Article 47 has not been drawn up; (d) the EU declaration of conformity referred to in Article 47 has not been drawn up correctly; (e) the registration in the EU database referred to in Article 71 has not been carried out; (f) where applicable, no authorised representative has been appointed; (g) technical documentation is not available. 2. Where the non-compliance referred to in paragraph 1 persists, the market surveillance authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or to ensure that it is recalled or withdrawn from the market without delay. |
第八十三条 正式违规 1、如果成员国的市场监管机构调查认为存在以下任一情形,应当要求相关提供者在规定的期限内终止相关违规行为: (a)CE标识的加贴违反本法第四十八条规定; (b)未加贴CE标识; (c)尚未起草本法第四十七条所规定的欧盟符合性声明; (d)未正确起草本法第四十七条所规定的欧盟符合性声明; (e)尚未根据本法第七十一条规定在欧盟数据库中登记; (f)尚未任命授权代表(如适用); (g)无法提供技术文档。 2、如果持续存在本条第1款所规定的违规行为,相关成员国的市场监管机构应当采取适当且相称的措施,限制或禁止相关高风险人工智能系统在市场上提供,或确保其被尽快召回或撤出市场。 |
Article 84 Union AI testing support structures 1. The Commission shall designate one or more Union AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 in the area of AI. 2. Without prejudice to the tasks referred to in paragraph 1, Union AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or of market surveillance authorities. |
第八十四条 欧盟的人工智能测试支持机构 1、 欧盟委员会应当指定一个或多个欧盟人工智能测试支持机构,在人工智能领域执行第2019/1020号条例第21条第6款所规定的任务。 2、 在不影响本条第1款所述任务的情况下,欧盟人工智能测试支持机构应当根据人工智能委员会、欧盟委员会或市场监管机构的要求提供独立的技术或科学建议。 |
SECTION 4 Remedies |
第四节 救济 |
Article 85 Right to lodge a complaint with a market surveillance authority Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities. |
第八十五条 向市场监管机构投诉的权利 在不影响其他行政或司法救济措施的情况下,有理由认为存在违反本法规定情形的自然人或法人均可向相关市场监管机构投诉。 根据第2019/1020号条例规定,在开展市场监管活动时应当考虑上述投诉,并应当按照市场监管机构为此制定的专门程序进行处理。 |
Article 86 Right to explanation of individual decision-making 1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken. 2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under that paragraph follow from Union or national law in compliance with Union law. 3. This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law. |
第八十六条 个人决策的解释权 1、 对于部署者根据本法附录三所列高风险人工智能系统(附录三第2条项下系统除外)的输出作出的、产生法律效力或以受影响者认为对其健康、安全或基本权利有不利影响的方式对其产生类似重大影响的决策,受该决策影响的任何主体均有权从部署者处获得关于人工智能系统在决策程序中所发挥作用以及所作决策之主要内容的清晰且有意义的解释。 2、 本条第1款不适用于根据欧盟法律或符合欧盟法的成员国法律对该款规定的义务作出例外或限制性规定的人工智能系统的使用。 3、 仅当本条第1款所述权利在欧盟法律中未有另行规定时,适用本条规定。 |
Article 87 Reporting of infringements and protection of reporting persons Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements. |
第八十七条 违规行为的举报和举报人保护 第2019/1937号指令应当适用于对违反本法规定行为的举报以及对举报此类违规行为人员的保护。 |
SECTION 5 Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models |
第五节 对通用人工智能模型提供者的监管、调查、执法和监测 |
Article 88 Enforcement of the obligations of providers of general-purpose AI models 1. The Commission shall have exclusive powers to supervise and enforce Chapter V, taking into account the procedural guarantees under Article 94. The Commission shall entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties. 2. Without prejudice to Article 75(3), market surveillance authorities may request the Commission to exercise the powers laid down in this Section, where that is necessary and proportionate to assist with the fulfilment of their tasks under this Regulation. |
第八十八条 通用人工智能模型提供者义务的执行 1、 考虑到本法第九十四条规定的程序保障,欧盟委员会应当拥有监督和执行本法第五章规定的专属权力。欧盟委员会应当委托人工智能办公室执行该等任务,但不得损害欧盟委员会的组织权力以及成员国与欧盟之间按条约划分的权限。 2、 在不影响本法第七十五条第3款规定的情况下,对于协助市场监管机构完成本法规定的任务确有必要且适当时,市场监管机构可以请求欧盟委员会行使本节规定的权力。 |
Article 89 Monitoring actions 1. For the purpose of carrying out the tasks assigned to it under this Section, the AI Office may take the necessary actions to monitor the effective implementation and compliance with this Regulation by providers of general-purpose AI models, including their adherence to approved codes of practice. 2. Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least: (a) the point of contact of the provider of the general-purpose AI model concerned; (b) a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation; (c) any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative. |
第八十九条 监测措施 1、 为执行本节规定的任务,人工智能办公室可以采取必要行动,监测通用人工智能模型提供者有效实施和遵守本法的情况,包括其遵守经批准的业务守则的情况。 2、 下游提供者有权就违反本法规定的行为提出投诉。投诉应当充分说明理由,并至少说明以下情况: (a)相关通用人工智能模型提供者的联络点; (b)相关事实描述、本法的相关规定,以及下游提供者认为相关通用人工智能模型提供者违反本法的原因; (c)提交请求的下游提供者认为有关的其他信息,包括在适当的情况下主动收集的信息。 |
Article 90 Alerts of systemic risks by the scientific panel 1. The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that: (a) a general-purpose AI model poses concrete identifiable risk at Union level; or (b) a general-purpose AI model meets the conditions referred to in Article 51. 2. Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any measure according to Articles 91 to 94. 3. A qualified alert shall be duly reasoned and indicate at least: (a) the point of contact of the provider of the general-purpose AI model with systemic risk concerned; (b) a description of the relevant facts and the reasons for the alert by the scientific panel; (c) any other information that the scientific panel considers to be relevant, including, where appropriate, information gathered on its own initiative. |
第九十条 科学小组对系统性风险的警示 1、科学小组有理由怀疑存在下列情形的,可以向人工智能办公室发出合格警示: (a)通用人工智能模型在欧盟层面带来具体的可识别风险;或 (b)通用人工智能模型符合本法第五十一条规定的条件。 2、收到上述合格警示后,欧盟委员会可以在通知人工智能委员会后通过人工智能办公室行使本节规定的权力,评估相关事宜。人工智能办公室应当将其根据本法第九十一条至第九十四条规定采取的任何措施通知人工智能委员会。 3、合格警示应当充分说明理由,并至少表明以下信息: (a)相关具有系统性风险的通用人工智能模型的提供者的联络点; (b)相关事实的描述以及科学小组发出警示的原因; (c)科学小组认为有关的任何其他信息,包括在适当情况下主动收集的信息。 |
Article 91 Power to request documentation and information 1. The Commission may request the provider of the general-purpose AI model concerned to provide the documentation drawn up by the provider in accordance with Articles 53 and 55, or any additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation. 2. Before sending the request for information, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model. 3. Upon a duly substantiated request from the scientific panel, the Commission may issue a request for information to a provider of a general-purpose AI model, where the access to information is necessary and proportionate for the fulfilment of the tasks of the scientific panel under Article 68(2). 4. The request for information shall state the legal basis and the purpose of the request, specify what information is required, set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for supplying incorrect, incomplete or misleading information. 5. The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading. |
第九十一条 要求提供文档和信息的权力 1、 欧盟委员会可以要求相关通用人工智能模型的提供者提供其根据本法第五十三条和第五十五条规定编制的文档,或为评估提供者遵守本法的情况所需的任何其他信息。 2、 在提出信息要求之前,人工智能办公室可以与通用人工智能模型的提供者开启结构化对话。 3、 获取相关信息对于科学小组根据本法第六十八条第2款规定履行职责确有必要且适当的情况下,在科学小组提出依据充分的请求后,欧盟委员会可以向通用人工智能模型的提供者发出信息要求。 4、 信息要求应当说明提出该要求的法律依据和目的,具体说明所需的信息,指定提供信息的期限,并说明本法第一百零一条规定的提供错误、不完整或误导性信息时面临的罚款。 5、 相关通用人工智能模型的提供者或其代表应当提供欧盟委员会所要求的信息。对于法人、公司或企业,或者不具备法人资格的提供者,法律或其组织章程规定有权代表该组织的主体应当代表相应通用人工智能模型的提供者按欧盟委员会的要求提供信息。经正式授权的律师可以代表其客户提供信息。但如果提供的信息不完整、错误或具有误导性,其客户应当承担全部责任。 |
Article 92 Power to conduct evaluations 1. The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned: (a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or (b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1), point (a). 2. The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria outlined in Article 68(2). 3. For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model concerned through APIs or further appropriate technical means and tools, including source code. 4. The request for access shall state the legal basis, the purpose and reasons of the request and set the period within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access. 5. The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. 6. The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2). 7. Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks. |
第九十二条 评估的权力 1、人工智能办公室在征求人工智能委员会的意见后,可以出于下列目的对相关通用人工智能模型进行评估: (a)在根据本法第九十一条规定收集的信息不充足的情况下,评估提供者是否遵守本法规定的义务;或 (b)在欧盟层面调查具有系统性风险的通用人工智能模型的系统性风险,特别是在科学小组根据本法第九十条第1款(a)项规定发出合格警示后。 2、欧盟委员会可以决定任命独立专家代表其进行评估,包括来自根据本法第六十八条规定设立的科学小组的专家。为此任命的独立专家应当符合本法第六十八条第2款规定的标准。 3、为本条第1款之目的,欧盟委员会可以要求通过API或其他适当的技术手段和工具(包括源代码)访问相关的通用人工智能模型。 4、访问要求应当说明法律依据、访问目的和原因,并规定提供访问权限的期限,以及本法第一百零一条规定的未提供访问时面临的罚款。 5、相关通用人工智能模型的提供者或其代表应当按要求提供信息。 对于法人、公司或企业,或者不具备法人资格的提供者,法律或其组织章程规定有权代表该组织的主体应当代表相应通用人工智能模型的提供者按要求提供访问权限。 6、欧盟委员会应当通过制定实施细则,规定评估的详细安排和条件,包括独立专家参与的详细安排及其遴选程序。该等实施细则应当按照本法第九十八条第2款规定的审查程序经审查通过。 7、在要求访问相关通用人工智能模型之前,人工智能办公室可以与通用人工智能模型的提供者进行结构化对话,以收集涉及模型的内部测试、预防系统性风险的内部保障措施以及提供者为降低此类风险所采取的其他内部程序和措施的更多信息。 |
Article 93 Power to request measures 1. Where necessary and appropriate, the Commission may request providers to: (a) take appropriate measures to comply with the obligations set out in Articles 53 and 54; (b) implement mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious and substantiated concern of a systemic risk at Union level; (c) restrict the making available on the market, withdraw or recall the model. 2. Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model. 3. If, during the structured dialogue referred to in paragraph 2, the provider of the general-purpose AI model with systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union level, the Commission may, by decision, make those commitments binding and declare that there are no further grounds for action. |
第九十三条 要求采取措施的权力 1、在必要且适当的情况下,欧盟委员会可向提供者提出如下要求: (a)采取适当措施履行本法第五十三条和第五十四条规定的义务; (b)如果根据本法第九十二条规定进行的评估引发其对欧盟层面系统性风险的严重且确切的担忧,(要求提供者)采取缓解措施; (c)限制该模型在市场上的提供,或撤回、召回该模型。 2、在要求(通用人工智能模型的提供者)采取措施之前,人工智能办公室可以与提供者进行结构化对话。 3、如果在本条第2款所规定的结构化对话中,具有系统性风险的通用人工智能模型的提供者承诺采取缓解措施解决欧盟层面的系统性风险,欧盟委员会可以据此作出相应决定使该等承诺具有约束力,并宣布无采取进一步行动的理由。 |
Article 94 Procedural rights of economic operators of the general-purpose AI model Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose AI model, without prejudice to more specific procedural rights provided for in this Regulation. |
第九十四条 通用人工智能模型商业运营者的程序性权利 第2019/1020号条例第18条应当参照适用于通用人工智能模型的提供者,但不影响本法规定的更具体的程序性权利。 |
CHAPTER X CODES OF CONDUCT AND GUIDELINES |
第十章 业务守则和指导方针 |
Article 95 Codes of conduct for voluntary application of specific requirements 1. The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and industry best practices allowing for the application of such requirements. 2. The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to: (a) applicable elements provided for in Union ethical guidelines for trustworthy AI; (b) assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for the efficient design, training and use of AI; (c) promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI; (d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process; (e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, including as regards accessibility for persons with a disability, as well as on gender equality. 3. Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. 4. The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including start-ups, when encouraging and facilitating the drawing up of codes of conduct. |
第九十五条 自愿适用具体要求的业务守则 1、 人工智能办公室和成员国应当考虑支持适用本法第三章第二节所规定部分或全部要求的现有技术解决方案和行业最佳实践,鼓励和推动制定业务守则(包括建立相关治理机制),以促进本法第三章第二节规定的部分或全部要求在人工智能系统(高风险人工智能系统除外)中的自愿适用。 2、 人工智能办公室和成员国应当根据具体的目标和衡量该等目标是否实现的关键绩效指标(包括但不限于以下要素),推动制定关于所有人工智能系统自愿适用(包括部署者自愿适用)具体要求的业务守则: (a)欧盟关于可信赖人工智能的伦理准则中规定的适用要素; (b)评估并尽量减少人工智能系统对环境可持续性的影响,包括有关节能规划和高效设计、训练和使用人工智能的技术; (c)提高人工智能素养,特别是人工智能开发、操作和使用人员的素养; (d)促进人工智能系统的包容性和多样化设计,包括通过建立包容性和多样化的开发团队,以及促进利益相关者参与该过程; (e)评估并预防人工智能系统对弱势者或弱势群体的不利影响,包括残障人士的无障碍环境以及性别平等。 3、业务守则可以由人工智能系统的单个提供者或部署者制定,也可以由代表其的组织制定,或由两者共同制定,包括在任何感兴趣的利益相关者及其代表组织(包括民间社会组织和学术界)的参与下制定。考虑到相关系统预期目的的相似性,业务守则可以涵盖一个或多个人工智能系统。 4、人工智能办公室和成员国在鼓励和推动业务守则的制定时,应当考虑到中小企业(包括初创企业)的具体利益和需求。 |
Article 96 Guidelines from the Commission on the implementation of this Regulation 1. The Commission shall develop guidelines on the practical implementation of this Regulation, and in particular on: (a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25; (b) the prohibited practices referred to in Article 5; (c) the practical implementation of the provisions related to substantial modification; (d) the practical implementation of transparency obligations laid down in Article 50; (e) detailed information on the relationship of this Regulation with the Union harmonisation legislation listed in Annex I, as well as with other relevant Union law, including as regards consistency in their enforcement; (f) the application of the definition of an AI system as set out in Article 3, point (1). When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, of local public authorities and of the sectors most likely to be affected by this Regulation. The guidelines referred to in the first subparagraph of this paragraph shall take due account of the generally acknowledged state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union harmonisation law. 2. At the request of the Member States or the AI Office, or on its own initiative, the Commission shall update guidelines previously adopted when deemed necessary. |
第九十六条 欧盟委员会关于实施本法的指导方针 1、欧盟委员会应当制定关于实施本法的指导方针,特别是涉及的如下事项: (a)本法第八条至第十五条和第二十五条所规定要求和义务的适用; (b)本法第五条所规定的禁止类活动; (c)实质性修改相关规定的实际执行情况; (d)实际履行第五十条规定的透明度义务; (e)关于本法与附录一所载欧盟统一立法以及其他相关欧盟法律之间关系的详细信息,包括其执行的一致性; (f)第三条第1款项下人工智能系统定义的应用。 在发布该等指导方针时,欧盟委员会应当特别关注中小企业(包括初创企业)、地方公权力机关和最有可能受本法影响的行业的需求。 本款第一项中规定的指导方针应当适当考虑公认的人工智能技术现状,以及第四十条和第四十一条项下相关统一标准和通用规范,或根据欧盟统一法制定的统一标准或技术规范。 2、欧盟委员会应当主动或根据成员国或人工智能办公室的要求,在必要时更新已经制定的指导方针。 |
CHAPTER XI DELEGATION OF POWER AND COMMITTEE PROCEDURE |
第十一章 行政授权和欧盟委员会程序 |
Article 97 Exercise of the delegation 1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article. 2. The power to adopt delegated acts referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) shall be conferred on the Commission for a period of five years from 1 August 2024. The Commission shall draw up a report in respect of the delegation of power not later than nine months before the end of the five-year period. The delegation of power shall be tacitly extended for periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three months before the end of each period. 3. The delegation of power referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force. 4. Before adopting a delegated act, the Commission shall consult experts designated by each Member State in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. 5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council. 6. Any delegated act adopted pursuant to Article 6(6) or (7), Article 7(1) or (3), Article 11(3), Article 43(5) or (6), Article 47(5), Article 51(3), Article 52(4) or Article 53(5) or (6) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council. |
第九十七条 授权的行使 1、根据本条规定的条件,欧盟委员会有权制定规章条例。 2、授权欧盟委员会制定本法第六条第6款和第7款、第七条第1款和第3款、第十一条第3款、第四十三条第5款和第6款、第四十七条第5款、第五十一条第3款、第五十二条第4款和第五十三条第5款和第6款提及的规章条例,授权期限自2024年8月1日起五年。欧盟委员会应当在该五年期限届满至少九个月前起草一份关于行政授权的报告。除欧洲议会或欧盟理事会在每期授权期限届满前至少三个月反对延期外,授权期限届满后自动延长相同期限。 3、欧洲议会或欧盟理事会可随时撤销本法第六条第6款和第7款、第七条第1款和第3款、第十一条第3款、第四十三条第5款和第6款、第四十七条第5款、第五十一条第3款、第五十二条第4款和第五十三条第5款和第6款项下授权。撤销决定应当终止决定中列明的授权。该等决定自其在《欧洲联盟公报》上公布之次日或决定规定的更晚生效日期生效。上述决定不应影响任何已生效的规章条例的有效性。 4、根据授权制定规章条例之前,欧盟委员会应当根据2016年4月13日《关于改进立法的机构间协议》所规定的原则,征询每个成员国指定的专家的意见。 5、欧盟委员会制定规章条例后,应当同时通知欧洲议会和欧盟理事会。 6、根据本法第六条第6款或第7款、第七条第1款或第3款、第十一条第3款、第四十三条第5款或第6款、第四十七条第5款、第五十一条第3款、第五十二条第4款或第五十三条第5款或第6款制定的任何规章条例,只有在欧洲议会和欧盟理事会在收到通知后三个月内均未表示反对,或在该期限届满前欧洲议会和欧盟理事会均已告知欧盟委员会其不会反对的情况下,方可生效。经欧洲议会或欧盟理事会提议,上述期限可延长三个月。 |
Article 98 Committee procedure 1. The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011. 2. Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply. |
第九十八条 欧盟委员会程序 1、 欧盟委员会应当由一个委员会协助。该委员会应当是第182/2011号条例项下委员会。 2、 援引本款时,应当适用第182/2011号条例第5条。 |
CHAPTER XII PENALTIES |
第十二章 罚则 |
Article 99 Penalties 1. In accordance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented, thereby taking into account the guidelines issued by the Commission pursuant to Article 96. The penalties provided for shall be effective, proportionate and dissuasive. They shall take into account the interests of SMEs, including start-ups, and their economic viability. 2. The Member States shall, without delay and at the latest by the date of entry into application, notify the Commission of the rules on penalties and of other enforcement measures referred to in paragraph 1, and shall notify it, without delay, of any subsequent amendment to them. 3. Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 4. Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher: (a) obligations of providers pursuant to Article 16; (b) obligations of authorised representatives pursuant to Article 22; (c) obligations of importers pursuant to Article 23; (d) obligations of distributors pursuant to Article 24; (e) obligations of deployers pursuant to Article 26; (f) requirements and obligations of notified bodies pursuant to Article 31, Article 33(1), (3) and (4) or Article 34; (g) transparency obligations for providers and deployers pursuant to Article 50. 5. The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to EUR 7 500 000 or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 6. In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower. 7. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and, as appropriate, regard shall be given to the following: (a) the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them; (b) whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement; (c) whether administrative fines have already been applied by other authorities to the same operator for infringements of other Union or national law, when such infringements result from the same activity or omission constituting a relevant infringement of this Regulation; (d) the size, the annual turnover and market share of the operator committing the infringement; (e) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement; (f) the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement; (g) the degree of responsibility of the operator taking into account the technical and organisational measures implemented by it; (h) the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement; (i) the intentional or negligent character of the infringement; (j) any action taken by the operator to mitigate the harm suffered by the affected persons. 8. Each Member State shall lay down rules on to what extent administrative fines may be imposed on public authorities and bodies established in that Member State. 9. Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts or by other bodies, as applicable in those Member States. The application of such rules in those Member States shall have an equivalent effect. 10. The exercise of powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and national law, including effective judicial remedies and due process. 11. Member States shall, on an annual basis, report to the Commission about the administrative fines they have issued during that year, in accordance with this Article, and about any related litigation or judicial proceedings. |
第九十九条 处罚 1、根据本法规定的条款和条件,各成员国应当就运营者违反本法规定的行为制定相应的处罚规则和其他强制措施(其中可以包括警告和非财产类处罚措施),并且,考虑到欧盟委员会根据本法第九十六条发布的指导方针,还应当采取一切必要措施确保其得到适当有效的执行。规定的处罚应当有效、适度且具有劝诫作用。应当考虑到中小企业(包括初创企业)的利益及其经济上的可行性。 2、成员国应当尽快(最迟不晚于本法施行之日)将本条第1款项下处罚规则和其他强制措施通知欧盟委员会,该等规则发生任何修订时,也应当立即通知欧盟委员会。 3、未遵守本法第五条项下禁止性人工智能活动规定的,应当处以最高3500万欧元的行政罚款,如果企业违规,最高可处以其上一财政年度全球年营业额7%的行政罚款(以较高者为准)。 4、除本法第五条所规定内容外,违反与运营者或评定机构有关的如下任何规定,应当处以最高1500万欧元的行政罚款,如果企业违规,最高可处以其上一财政年度全球年营业额3%的行政罚款(以较高者为准): (a)第十六条规定的提供者义务; (b)第二十二条规定的授权代表的义务; (c)第二十三条规定的进口方的义务; (d)第二十四条规定的经销方的义务; (e)第二十六条规定的部署者的义务; (f)第三十一条、第三十三条第1款、第3款和第4款或第三十四条规定的评定机构的要求和义务; (g)第五十条规定的提供者和部署者的透明度义务。 5、答复(监管机构的)要求时向评定机构或国家主管机关提供的信息不正确、不完整或误导性的,应当处以最高750万欧元的行政罚款,如果企业违反上述规定,则处以其上一财政年度全球年营业额总额的1%的行政罚款,以较高者为准。 6、对于中小企业(包括初创企业),根据本条规定作出的每笔罚款不应超过本条第3款、第4款和第5款所规定的百分比或金额,且应以较低者为准。 7、在决定是否处以行政罚款和行政罚款金额时,应当考虑案涉所有相关情况,并酌情考虑以下因素: (a)基于人工智能系统的目的,考虑侵权行为的性质、严重性和持续时间及其后果,并在适当情况下考虑受影响的人数及其所遭受的损害程度; (b)其他市场监管机构是否已对同一经营者因同一侵权行为采取过行政罚款; (c)同一经营者的同一行为或过错既构成本法项下侵权也构成其他欧盟或成员国法项下侵权的,其他行政机关是否已经对同一经营者违反其他欧盟或成员国法律的行为采取过行政罚款; (d)侵权行为人的规模、年营业额和市场份额; (e)案情相关的任何其他加重或减轻因素,例如直接或间接从侵权中获得的经济利益或避免的损失; (f)为补救侵权行为并减轻侵权行为可能产生的不利影响,与国家主管机关的配合度; (g)基于运营者采取的技术和组织措施,考虑运营者的责任大小; (h)国家主管机关获悉侵权行为的方式,特别是经营者是否报告侵权行为及其报告程度; (i)侵权行为的故意或过失性质; (j)运营者为减轻受影响人员所遭受的损害而采取的任何行动。 8、各成员国应当制定规则,规定可以在多大程度上对在该成员国设立的公权力机关和机构处以行政罚款。 9、根据成员国法律体系的不同,成员国适用行政罚款规则的方式可能是:由该等成员国有管辖权的法院或其他行政机关执行罚款。该等规则在上述成员国适用时应当具有同等效力。 10、根据本条行使权力应当受到欧盟和成员国法律规定的适当程序保障,包括有效的司法救济和正当程序。 11、成员国应当每年向欧盟委员会报告其当年度根据本条规定采取的行政罚款和任何相关诉讼或司法程序情况。 |
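【编者示例】第九十九条第3款至第6款的罚款上限采用"固定金额与全球年营业额百分比取其高、中小企业取其低"的计算方式。以下为一个示意性的 Python 计算草图,仅用于帮助理解条文中的算术关系;其中的函数名与参数均为编者为演示而假设,并非任何官方计算工具,亦不构成法律意见。

def art99_fine_cap(paragraph: int, turnover_eur: float, is_sme: bool = False) -> float:
    """按第九十九条第3款至第5款估算行政罚款上限(单位:欧元;示意用途)。"""
    # 第3款:3500万欧元或全球年营业额的7%;第4款:1500万欧元或3%;第5款:750万欧元或1%
    caps = {3: (35_000_000, 0.07), 4: (15_000_000, 0.03), 5: (7_500_000, 0.01)}
    flat_cap, pct = caps[paragraph]
    turnover_cap = turnover_eur * pct
    # 一般情形:取固定金额与营业额百分比中的较高者;中小企业(含初创企业,第6款):取较低者
    return min(flat_cap, turnover_cap) if is_sme else max(flat_cap, turnover_cap)

# 用法示意:上一财年全球营业额20亿欧元的企业违反第五条,上限约为1.4亿欧元(取较高者)
print(art99_fine_cap(3, 2_000_000_000))
# 全球营业额1000万欧元的中小企业,同样情形下上限约为70万欧元(取较低者)
print(art99_fine_cap(3, 10_000_000, is_sme=True))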
Article 100 Administrative fines on Union institutions, bodies, offices and agencies 1. The European Data Protection Supervisor may impose administrative fines on Union institutions, bodies, offices and agencies falling within the scope of this Regulation. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following: (a)the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by them; (b)the degree of responsibility of the Union institution, body, office or agency, taking into account technical and organisational measures implemented by them; (c) any action taken by the Union institution, body, office or agency to mitigate the damage suffered by affected persons; (d)the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the Union institution, body, office or agency concerned with regard to the same subject matter; (e)any similar previous infringements by the Union institution, body, office or agency; (f) the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the Union institution, body, office or agency notified the infringement; (g)the annual budget of the Union institution, body, office or agency. 2. Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 1 500 000. 3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 750 000. 4. Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the Union institution, body, office or agency which is the subject of the proceedings conducted by the European Data Protection Supervisor the opportunity of being heard on the matter regarding the possible infringement. The European Data Protection Supervisor shall base his or her decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely with the proceedings. 5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets. 6. Funds collected by imposition of fines in this Article shall contribute to the general budget of the Union. The fines shall not affect the effective operation of the Union institution, body, office or agency fined. 7. The European Data Protection Supervisor shall, on an annual basis, notify the Commission of the administrative fines it has imposed pursuant to this Article and of any litigation or judicial proceedings it has initiated. |
第一百条 对欧盟各机构的行政罚款 1、欧洲数据保护监督员可以对本法范围内的欧盟各机构处以行政罚款。在决定是否处以行政罚款和行政罚款金额时,应当考虑案涉所有相关情况,并适当考虑以下因素: (a)基于相关人工智能系统的目的,考虑侵权的性质、严重性和持续时间及其后果,并在适当情况下考虑受影响的人数及其所遭受的损害程度; (b)基于欧盟各机构采取的技术和组织措施,考虑其责任大小; (c)欧盟各机构为减轻受影响人员所遭受的损害而采取的任何行动; (d)为补救侵权行为并减轻侵权行为可能产生的不利影响,与欧洲数据保护监督员配合的程度,包括遵守欧洲数据保护监督员此前就同一事项向欧盟各机构下令采取的任何措施; (e)欧盟各机构过往的任何类似侵权行为; (f)欧洲数据保护监督员获悉侵权行为的方式,特别是欧盟各机构是否报告侵权行为及其报告的程度; (g)欧盟各机构的年度预算。 2、违反第五条项下禁止性人工智能活动的规定,应当处以最高一百五十万欧元行政罚款。 3、除第五条所规定的要求或义务外,人工智能系统不符合本法规定的任何要求或义务的,应当处以最高七十五万欧元的行政罚款。 4、根据本条规定作出决定之前,欧洲数据保护监督员应当给予其所采取程序的当事方欧盟机构就可能的侵权事项发表意见的机会。欧洲数据保护监督员应当仅根据相关各方能够发表意见的要素和情况做出决定。投诉人(如有)应当密切参与相关程序。 5、处理过程应当充分尊重有关各方的辩护权。他们有权获取欧洲数据保护监督员的文件,但须符合个人或企业在保护其个人数据或商业秘密方面的合法利益。 6、根据本条规定罚款所筹集的资金应当纳入欧盟总预算。罚款不得影响被处罚的欧盟机构的有效运转。 7、欧洲数据保护监督员应当每年向欧盟委员会报告其根据本条规定采取的行政罚款、提起的任何诉讼或司法程序情况。 |
Article 101 Fines for providers of general-purpose AI models 1. The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their annual total worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher, when the Commission finds that the provider intentionally or negligently: (a) infringed the relevant provisions of this Regulation; (b) failed to comply with a request for a document or for information pursuant to Article 91, or supplied incorrect, incomplete or misleading information; (c) failed to comply with a measure requested under Article 93; (d) failed to make available to the Commission access to the general-purpose AI model or general-purpose AI model with systemic risk with a view to conducting an evaluation pursuant to Article 92. In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with Article 56. 2. Before adopting the decision pursuant to paragraph 1, the Commission shall communicate its preliminary findings to the provider of the general-purpose AI model and give it an opportunity to be heard. 3. Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive. 4. Information on fines imposed under this Article shall also be communicated to the Board as appropriate. 5. The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions of the Commission fixing a fine under this Article. It may cancel, reduce or increase the fine imposed. 6. The Commission shall adopt implementing acts containing detailed arrangements and procedural safeguards for proceedings in view of the possible adoption of decisions pursuant to paragraph 1 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2). |
第一百零一条 对通用人工智能模型提供者罚款 1、欧盟委员会认定通用人工智能模型提供者存在下列故意或过失时,可以对通用人工智能模型提供者处以罚款,罚款金额不超过其上一财政年度全球年总营业额的百分之三或一千五百万欧元,以较高者为准: (a)违反本法的有关规定; (b)未遵守本法第九十一条规定的文档或信息要求,或提供不正确、不完整或误导性信息; (c)未遵守本法第九十三条要求的措施; (d)未就欧盟委员会根据第九十二条规定对通用人工智能模型或具有系统性风险的通用人工智能模型进行评估,向欧盟委员会提供上述模型的访问权限。 欧盟委员会在确定罚款或定期罚款的金额时,应当考虑侵权行为的性质、严重程度和持续时间,并适当兼顾比例原则和适当性原则。还应当考虑根据第九十三条第3款规定作出的承诺或根据第五十六条规定在相关行业守则中作出的承诺。 2、在根据本条第1款作出决定之前,欧盟委员会应当将其初步调查结果告知通用人工智能模型的提供者,并给予其陈述的机会。 3、根据本条规定处以的罚款应当有效、适度且具有劝诫作用。 4、根据本条规定罚款的信息也应当视情况传达给人工智能委员会。 5、欧盟法院对欧盟委员会根据本条作出的罚款决定具有无限管辖权。其有权撤销、减少或增加罚款。 6、鉴于可能会根据本条第1款规定作出决定,欧盟委员会应当制定包含(罚款的)详细安排和程序保障的实施细则。该等实施细则应当按照第九十八条第2款规定的审查程序经审查通过。 |
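【编者示例】与第九十九条类似,第一百零一条第1款对通用人工智能模型提供者的罚款上限为"1500万欧元或上一财年全球年总营业额的3%,以较高者为准"。以下为编者假设的示意性写法,仅说明该算术关系,函数名为演示自拟:

def art101_fine_cap(turnover_eur: float) -> float:
    """第一百零一条第1款:取1500万欧元与全球年总营业额3%中的较高者(示意)。"""
    return max(15_000_000, 0.03 * turnover_eur)

# 用法示意:全球年总营业额10亿欧元的提供者,罚款上限约为3000万欧元
print(art101_fine_cap(1_000_000_000))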
CHAPTER XIII FINAL PROVISIONS |
第十三章 最后条款 |
Article 102 Amendment to Regulation (EC) No 300/2008 In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added: ‘When adopting detailed measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零二条 关于第300/2008号条例的修正案 在第300/2008号条例第4条第(3)款增加以下内容: “在采取涉及欧洲议会和欧盟理事会第2024/1689号条例(*)所规定人工智能系统安全设备的批准和使用技术规范和程序的具体措施时,应当考虑该条例第三章第2节所规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法案》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 103 Amendment to Regulation (EU) No 167/2013 In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added: ‘When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零三条 关于第167/2013号条例的修正案 在第167/2013号条例第17条第(5)款增加以下内容: “人工智能系统作为欧洲议会和欧盟理事会第2024/1689号条例(*)意义上的安全组件时,根据第一项制定关于该等人工智能系统的规章条例,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 104 Amendment to Regulation (EU) No 168/2013 In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added: ‘When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零四条 关于第168/2013号条例的修正案 在第168/2013号条例第22条第(5)条增加以下内容: “根据欧洲议会和欧盟理事会第2024/1689号条例(*)关于人工智能系统的第一项规定制定规章条例时,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 105 Amendment to Directive 2014/90/EU In Article 8 of Directive 2014/90/EU, the following paragraph is added: ‘5. For Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), when carrying out its activities pursuant to paragraph 1 and when adopting technical specifications and testing standards in accordance with paragraphs 2 and 3, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零五条 关于第2014/90/EU指令的修正案 在第2014/90/EU指令第8条增加以下内容: “5、对属于欧洲议会和欧盟理事会第2024/1689号条例(*)项下安全部件的人工智能系统,根据本条第1款开展活动以及根据本条第2款和第3款采用技术规范和测试标准时,欧盟委员会应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/ 2024/1689/oj)。” |
Article 106 Amendment to Directive (EU) 2016/797 In Article 5 of Directive (EU) 2016/797, the following paragraph is added: ‘12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零六条 关于第2016/797号指令的修正案 在第2016/797号指令第5条增加了以下内容: “12、针对构成欧洲议会和欧盟理事会第2024/1689号条例(*)项下安全组件的人工智能系统,根据本条第1款制定规章条例和根据第11款制定实施细则时,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 107 Amendment to Regulation (EU) 2018/858 In Article 5 of Regulation (EU) 2018/858 the following paragraph is added: ‘4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零七条 关于第2018/858号条例的修正案 在第2018/858号条例第5条增加以下内容: “4、针对构成欧洲议会和欧盟理事会第2024/1689号条例(*)项下安全组件的人工智能系统,根据本条第3款制定规章条例时,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 108 Amendments to Regulation (EU) 2018/1139 Regulation (EU) 2018/1139 is amended as follows: (1) in Article 17, the following paragraph is added: ‘3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj).’; (2) in Article 19, the following paragraph is added: ‘4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.’; (3) in Article 43, the following paragraph is added: ‘4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.’; (4) in Article 47, the following paragraph is added: ‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.’; (5) in Article 57, the following subparagraph is added: ‘When adopting those implementing acts concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.’; (6) in Article 58, the following paragraph is added: ‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account.’. |
第一百零八条 关于第2018/1139号条例的修正案 第2018/1139号条例作如下修订: (1)在第17条中增加以下内容: “3、在不影响本条第2款规定的情况下,对于构成欧洲议会和欧盟理事会第2024/1689号条例(*)项下安全组件的人工智能系统,根据本条第1款制定实施细则时,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日欧洲议会和理事会第2024/1689号条例(欧盟),规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” (2)在第19条中增加以下内容: “4、对于构成欧洲议会和欧盟理事会第2024/1689号条例项下安全组件的人工智能系统,根据本条第1款和第2款制定规章条例时,应当考虑该条例第三章第2节中规定的要求。” (3)在第43条中增加以下内容: “4、对于构成欧洲议会和欧盟理事会第2024/1689号条例项下安全组件的人工智能系统,根据本条第1款规定制定实施细则时,应当考虑该条例第三章第2节规定的要求。” (4)在第47条中增加以下内容: “3、对于构成欧洲议会和欧盟理事会第2024/1689号条例项下安全组件的人工智能系统,根据本条第1款和第2款制定实施细则时,应当考虑该条例第三章第2节中规定的要求” (5)在第57条中增加以下内容: “对于构成欧洲议会和欧盟理事会第2024/1689号条例项下安全组件的人工智能系统,制定相关实施细则时,应当考虑该条例第三章第2节中规定的要求。” (6)在第58条中增加以下段落: “3、对于构成欧洲议会和欧盟理事会第2024/1689号条例项下安全组件的人工智能系统,根据本条第1款和第2款制定规章条例时,应当考虑该条例第三章第2节中规定的要求。” |
Article 109 Amendment to Regulation (EU) 2019/2144 In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added: ‘3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (*), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. (*) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ 2024/1689/oj).’. |
第一百零九条 关于第2019/2144号条例的修正案 在第2019/2144号条例第11条中增加了以下内容: “3. 对于构成欧洲议会和欧盟理事会第2024/1689号条例(*)项下安全组件的人工智能系统,根据本条第2款制定实施细则时,应当考虑该条例第三章第2节中规定的要求。 (*)2024年6月13日,欧洲议会和欧盟理事会第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号条例,以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。” |
Article 110 Amendment to Directive (EU) 2020/1828 In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council (58), the following point is added: ‘(68) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj).’. |
第一百一十条 关于第2020/1828号指令的修正案 在欧洲议会和欧盟理事会第2020/1828号指令(58)的附件一中增加以下内容: “(68)2024年6月13日,欧洲议会和欧盟理事会制定第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号和第2018/858号、第2018/1139号和第2019/2144号条例以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。 |
Article 111 AI systems already placed on the market or put into service and general-purpose AI models already placed on the market 1. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030. The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts and where those legal acts are replaced or amended. 2. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed on the market or put into service before 2 August 2026, only if, as from that date, those systems are subject to significant changes in their designs. In any case, the providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by 2 August 2030. 3. Providers of general-purpose AI models that have been placed on the market before 2 August 2025 shall take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027. (58) Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1). |
第一百一十一条 已投放到市场或投入使用的人工智能系统和已投放到市场的通用人工智能模型 1、 在不影响本法第一百一十三条第3款(a)项关于第五条的适用方式的情况下,作为根据附录十所列法案建立的大型计算机系统之组成部分的人工智能系统,在2027年8月2日之前被投放到市场或投入使用的,应当在2030年12月31日前符合本法规定。 根据附录十所列法案及其替代规范、修正案对根据该等法案建立的每个大型计算机系统进行评估时,应当考虑本法规定的要求。 2、 在不影响第一百一十三条第3款(a)项关于第五条的适用方式的情况下,对于在2026年8月2日之前已被投放到市场或投入使用的高风险人工智能系统(本条第1款所述系统除外),只有当该等系统的设计自该日起发生重大变更时,本法才适用于其运营者。但在任何情况下,拟由公权力机关使用的高风险人工智能系统的提供者和部署者均应当在2030年8月2日前采取必要措施,使之符合本法规定的要求和义务。 3、 在2025年8月2日之前被投放到市场的通用人工智能模型的提供者应当采取必要措施,在2027年8月2日之前履行本法规定的义务。 【注释】(58)2020年11月25日,欧洲议会和欧盟理事会发布《关于保护消费者集体利益的代表性活动及废除第2009/22/EC号指令》的第2020/1828号指令(OJ L 409,2020年12月4日,第1页)。 |
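【编者示例】第一百一十一条为已投放到市场或投入使用的系统和模型设置了不同的过渡期限。下面是编者给出的一个简化判断草图,仅覆盖该条明示的几种情形;日期与条件直接取自条文,函数名、参数与返回文案均为编者为演示而假设,不构成法律意见。

from datetime import date

def art111_transition(kind: str, placed_on_market: date,
                      design_significantly_changed: bool = False) -> str:
    """按第一百一十一条粗略给出过渡性安排(示意用途)。"""
    if kind == "annex_x_large_scale_it" and placed_on_market < date(2027, 8, 2):
        return "应在2030年12月31日前符合本法(第1款)"
    if kind == "gpai_model" and placed_on_market < date(2025, 8, 2):
        return "应在2027年8月2日前履行本法义务(第3款)"
    if kind == "high_risk" and placed_on_market < date(2026, 8, 2):
        if design_significantly_changed:
            return "设计发生重大变更,本法适用于其运营者(第2款)"
        return ("设计未发生重大变更,本法原则上不适用;"
                "但拟由公权力机关使用的系统,其提供者和部署者须在2030年8月2日前合规(第2款)")
    return "不属于第一百一十一条所列过渡情形,适用第一百一十三条的一般施行规则"

# 用法示意:2025年3月1日已投放市场的通用人工智能模型
print(art111_transition("gpai_model", date(2025, 3, 1)))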
Article 112 Evaluation and review 1. The Commission shall assess the need for amendment of the list set out in Annex III and of the list of prohibited AI practices laid down in Article 5, once a year following the entry into force of this Regulation, and until the end of the period of the delegation of power laid down in Article 97. The Commission shall submit the findings of that assessment to the European Parliament and the Council. 2. By 2 August 2028 and every four years thereafter, the Commission shall evaluate and report to the European Parliament and to the Council on the following: (a) the need for amendments extending existing area headings or adding new area headings in Annex III; (b) amendments to the list of AI systems requiring additional transparency measures in Article 50; (c) amendments enhancing the effectiveness of the supervision and governance system. 3. By 2 August 2029 and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The report shall include an assessment with regard to the structure of enforcement and the possible need for a Union agency to resolve any identified shortcomings. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The reports shall be made public. 4. The reports referred to in paragraph 2 shall pay specific attention to the following: (a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation; (b) the state of penalties, in particular administrative fines as referred to in Article 99(1), applied by Member States for infringements of this Regulation; (c) adopted harmonised standards and common specifications developed to support this Regulation; (d) the number of undertakings that enter the market after the entry into application of this Regulation, and how many of them are SMEs. 5. By 2 August 2028, the Commission shall evaluate the functioning of the AI Office, whether the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to increase its resources. The Commission shall submit a report on its evaluation to the European Parliament and to the Council. 6. By 2 August 2028 and every four years thereafter, the Commission shall submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding measures or actions. The report shall be submitted to the European Parliament and to the Council, and it shall be made public. 7. By 2 August 2028 and every three years thereafter, the Commission shall evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 for AI systems other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, including as regards environmental sustainability. 8. For the purposes of paragraphs 1 to 7, the Board, the Member States and national competent authorities shall provide the Commission with information upon its request and without undue delay. 9. In carrying out the evaluations and reviews referred to in paragraphs 1 to 7, the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources. 10. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, and on fundamental rights, and in light of the state of progress in the information society. 11. To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall undertake to develop an objective and participative methodology for the evaluation of risk levels based on the criteria outlined in the relevant Articles and the inclusion of new systems in: (a) the list set out in Annex III, including the extension of existing area headings or the addition of new area headings in that Annex; (b) the list of prohibited practices set out in Article 5; and (c) the list of AI systems requiring additional transparency measures pursuant to Article 50. 12. Any amendment to this Regulation pursuant to paragraph 10, or relevant delegated or implementing acts, which concerns sectoral Union harmonisation legislation listed in Section B of Annex I shall take into account the regulatory specificities of each sector, and the existing governance, conformity assessment and enforcement mechanisms and authorities established therein. 13. By 2 August 2031, the Commission shall carry out an assessment of the enforcement of this Regulation and shall report on it to the European Parliament, the Council and the European Economic and Social Committee, taking into account the first years of application of this Regulation. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation with regard to the structure of enforcement and the need for a Union agency to resolve any identified shortcomings. |
第一百一十二条 评估和审查 1、 在本法生效后,欧盟委员会应当每年评估一次是否需要修订本法附录三所列清单和第五条所规定的禁止性人工智能活动清单,直至第九十七条规定的授权期限结束。欧盟委员会应当将评估结果提交给欧洲议会和欧盟理事会。 2、2028年8月2日之前以及此后每四年,欧盟委员会都应当评估并向欧洲议会和欧盟理事会报告以下情况: (a)是否需要通过修订来扩展附录三所列的既有领域标题或增加新的领域标题; (b)修订本法第五十条中需要采取额外透明度措施的人工智能系统的清单; (c)增强监督治理体系有效性的修正案。 3、 2029年8月2日之前及此后每四年,欧盟委员会都应当向欧洲议会和欧盟理事会提交一份关于本法的评估和审查的报告。报告应当包括对执法结构以及是否可能需要由一个欧盟机构来解决已发现缺陷的评估。根据调查结果,该报告应当在适当的情况下附上对本法的修订建议。报告应当公开。 4、 本条第2款所述报告应当特别关注以下方面: (a)为有效执行本法规定的任务,国家主管机关的财政、技术和人力资源状况; (b)成员国对违反本法的行为适用的处罚措施情况,特别是第九十九条第1款所规定的行政罚款; (c)为落实本法而制定的统一标准和通用规范的采用情况; (d)本法实施后进入市场的企业数量,以及其中的中小企业占比。 5、2028年8月2日之前,欧盟委员会应当评估人工智能办公室的运行情况,评估其是否被赋予足够的权力和职权来完成其任务,以及为正确实施和执行本法,提升人工智能办公室的地位及其执法职权并增加其资源是否适当和必要。欧盟委员会应当向欧洲议会和欧盟理事会提交一份评估报告。 6、 在2028年8月2日之前及此后每四年,欧盟委员会都应当提交一份报告,审查通用人工智能模型节能开发标准化成果的完成进展,并评估是否需要采取进一步措施或行动(包括具有约束力的措施或行动)。上述报告应当提交给欧洲议会和欧盟理事会,并应当公开。 7、 在2028年8月2日之前及此后每三年,欧盟委员会都应当评估自愿性业务守则的影响和有效性,该等守则旨在促进除高风险人工智能系统以外的人工智能系统适用本法第三章第二节规定的要求,并可能包括适用于该等系统的其他附加要求,包括环境可持续性方面的要求。 8、为本条第1款至第7款之目的,人工智能委员会、各成员国和成员国主管机关应当根据欧盟委员会的要求,尽快向欧盟委员会提供信息。 9、在进行本条第1款至第7款项下评估和审查时,欧盟委员会应当考虑人工智能委员会、欧洲议会、欧盟理事会和其他相关机构或来源的立场和调查结果。 10、特别是考虑到技术的发展、人工智能系统对健康和安全以及基本权利的影响,并根据信息社会的进步状况,欧盟委员会应当在必要时提交适当的提案以修订本法。 11、为指导本条第1款至第7款项下的评估和审查,人工智能办公室应当着手根据相关条款中列明的标准,制定一套客观、参与式的方法,用于评估风险水平以及是否将新系统纳入: (a)附录三中的清单,包括在该附录中扩展现有的领域标题或添加新的领域标题; (b)本法第五条项下的禁止类行为清单;和 (c)根据第五十条规定,需要采取附加透明度措施的人工智能系统清单。 12、根据本条第10款或相关规章条例或实施细则规定,对本法进行的任何修订涉及附录一第B条所列行业的欧盟统一立法的,应当考虑每个部门的监管特殊性,以及行业已建立的现有治理、合格评定和执法机制和机构。 13、欧盟委员会应当在2031年8月2日之前,结合本法施行最初几年的情况,对本法的执行情况进行评估,并向欧洲议会、欧盟理事会和欧洲经济和社会委员会报告。根据调查结果,该报告应当视情况附上一份针对执行结构以及是否需要由一个欧盟机构解决已发现缺陷的本法修订提案。 |
Article 113 Entry into force and application This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union. It shall apply from 2 August 2026. However: (a) Chapters I and II shall apply from 2 February 2025; (b) Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101; (c) Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027. This Regulation shall be binding in its entirety and directly applicable in all Member States. |
第一百一十三条 生效和施行 本法自正式版全文在《欧盟官方公报》上公布之日起第二十日开始生效。 本法自2026年8月2日起开始施行,但下列条款除外: (a) 本法第一章和第二章自2025年2月2日起施行; (b) 本法第三章第四节、第五章、第七章和第十二章(除第一百零一条外)以及第七十八条自2025年8月2日起施行; (c) 本法第六条第1款及其相关条款规定的相应义务应当自2027年8月2日起施行。 本法应当具有全面约束力并直接适用于所有成员国。 |
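【编者示例】第一百一十三条采用分阶段施行:一般自2026年8月2日起施行,另有三类例外。以下为编者整理的示意性速查草图,键名为编者为演示自拟,并非法案用语,仅供阅读条文时对照参考。

from datetime import date

DEFAULT_APPLICATION = date(2026, 8, 2)  # 一般规则:本法自2026年8月2日起施行
SPECIAL_APPLICATION = {
    "第一章、第二章": date(2025, 2, 2),                                    # (a)
    "第三章第四节、第五章、第七章、第十二章(第101条除外)及第78条": date(2025, 8, 2),  # (b)
    "第六条第1款及相应义务": date(2027, 8, 2),                              # (c)
}

def application_date(provision: str) -> date:
    """返回给定条款组的施行日期;未列明者适用一般规则(示意)。"""
    return SPECIAL_APPLICATION.get(provision, DEFAULT_APPLICATION)

# 用法示意
print(application_date("第一章、第二章"))  # 2025-02-02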
Done at Brussels, 13 June 2024.
For the European Parliament The President R. METSOLA
For the Council The President M. MICHEL |
2024年6月13日发布于布鲁塞尔。 欧洲议会 主席 梅特索拉(Roberta Metsola)
欧盟理事会 主席 马蒂厄·米歇尔(Mathieu Michel) |
(后接法案十三份附录)