本期内容为欧盟《人工智能法案》中英对照版,法案英文全文9万余字、中文译文约6万字,囿于公号篇幅限制,分四期发出。本期为第34-71条。
Article 34 Operational obligations of notified bodies 1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures set out in Article 43. 2. Notified bodies shall avoid unnecessary burdens for providers when performing their activities, and take due account of the size of the provider, the sector in which it operates, its structure and the degree of complexity of the high-risk AI system concerned, in particular in view of minimising administrative burdens and compliance costs for micro- and small enterprises within the meaning of Recommendation 2003/361/EC. The notified body shall, nevertheless, respect the degree of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this Regulation. 3. Notified bodies shall make available and submit upon request all relevant documentation, including the providers’ documentation, to the notifying authority referred to in Article 28 to allow that authority to conduct its assessment, designation, notification and monitoring activities, and to facilitate the assessment outlined in this Section. |
第三十四条 评定机构的运营义务 1、评定机构应当根据本法第四十三条规定的符合性评定程序验证高风险人工智能系统的符合性。 2、评定机构在开展活动时应当避免给提供者带来不必要的负担,并应适当考虑提供者的规模、经营所处行业、其结构和相关高风险人工智能系统的复杂程度,尤其是应考虑尽量减少欧盟委员会第2003/361/EC号建议所定义的小微企业的行政负担和合规成本。但是,评定机构仍应当遵守本法对高风险人工智能系统合规所要求的严格程度和保护水平。 3、评定机构应当向本法第二十八条所规定的通报机构公开并应要求提交所有相关文件(包括提供者的文档),以便该通报机构开展评估、指定、通报和监测活动,并推动本节所规定的评估工作。 |
Article 35 Identification numbers and lists of notified bodies 1. The Commission shall assign a single identification number to each notified body, even where a body is notified under more than one Union act. 2. The Commission shall make publicly available the list of the bodies notified under this Regulation, including their identification numbers and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date. |
第三十五条 评定机构的识别号和名单 1、欧盟委员会应当为每个评定机构分配一个唯一识别号;即使某一机构根据多部欧盟法律被通报,亦只分配一个识别号。 2、欧盟委员会应当公布根据本法规定通报的机构名单,包含其识别号和通报所载的活动范围。欧盟委员会应当确保持续更新名单。 |
Article 36 Changes to notifications 1. The notifying authority shall notify the Commission and the other Member States of any relevant changes to the notification of a notified body via the electronic notification tool referred to in Article 30(2). 2. The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification. For changes to the notification other than extensions of its scope, the procedures laid down in paragraphs (3) to (9) shall apply. 3. Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its activities. The certificates of the notified body may remain valid for a period of nine months after cessation of the notified body’s activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the high-risk AI systems affected by the end of that nine-month-period before issuing new certificates for those systems. Where the notified body has ceased its activity, the notifying authority shall withdraw the designation. 4. Where a notifying authority has sufficient reason to consider that a notified body no longer meets the requirements laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. 
If the notifying authority comes to the conclusion that the notified body no longer meets the requirements laid down in Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the designation as appropriate, depending on the seriousness of the failure to meet those requirements or fulfil those obligations. It shall immediately inform the Commission and the other Member States accordingly. 5. Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall inform the providers concerned within 10 days. 6. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to notifying authorities in other Member States and to market surveillance authorities at their request. 7. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall: (a)assess the impact on the certificates issued by the notified body; (b) submit a report on its findings to the Commission and the other Member States within three months of having notified the changes to the designation; (c)require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any certificates which were unduly issued, in order to ensure the continuing conformity of high-risk AI systems on the market; (d)inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required; (e) provide the national competent authorities of the Member State in which the provider has its registered place of business with all relevant information about the certificates of which it has required the suspension or withdrawal; that authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental rights. 8. 
With the exception of certificates unduly issued, and where a designation has been suspended or restricted, the certificates shall remain valid in one of the following circumstances: (a)the notifying authority has confirmed, within one month of the suspension or restriction, that there is no risk to health, safety or fundamental rights in relation to certificates affected by the suspension or restriction, and the notifying authority has outlined a timeline for actions to remedy the suspension or restriction; or (b)the notifying authority has confirmed that no certificates relevant to the suspension will be issued, amended or re-issued during the course of the suspension or restriction, and states whether the notified body has the capability of continuing to monitor and remain responsible for existing certificates issued for the period of the suspension or restriction; in the event that the notifying authority determines that the notified body does not have the capability to support existing certificates issued, the provider of the system covered by the certificate shall confirm in writing to the national competent authorities of the Member State in which it has its registered place of business, within three months of the suspension or restriction, that another qualified notified body is temporarily assuming the functions of the notified body to monitor and remain responsible for the certificates during the period of suspension or restriction. 9. 
With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall remain valid for a period of nine months under the following circumstances: (a)the national competent authority of the Member State in which the provider of the high-risk AI system covered by the certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights associated with the high-risk AI systems concerned; and (b)another notified body has confirmed in writing that it will assume immediate responsibility for those AI systems and completes its assessment within 12 months of the withdrawal of the designation. In the circumstances referred to in the first subparagraph, the national competent authority of the Member State in which the provider of the system covered by the certificate has its place of business may extend the provisional validity of the certificates for additional periods of three months, which shall not exceed 12 months in total. The national competent authority or the notified body assuming the functions of the notified body affected by the change of designation shall immediately inform the Commission, the other Member States and the other notified bodies thereof. |
第三十六条 通报变更 1、通报机构应当通过本法第三十条第2款所规定的电子通报工具,将评定机构通报的任何相关变更通报欧盟委员会和其他成员国。 2、本法第二十九条和第三十条规定的程序适用于扩展通报范围。除扩展通报范围以外的通报变更,应当按本条第3款至第9款规定的程序进行。 3、评定机构决定停止其符合性评定业务的,应当尽快通知通报机构和有关提供者;如果计划终止其符合性评定业务,应当在终止前至少一年通知上述主体。在另一评定机构书面确认将对相关证书所涵盖的高风险人工智能系统承担责任的前提下,评定机构已经颁发的证书在其业务停止后的九个月内可以继续有效。承接责任的评定机构应当在该九个月期限届满前完成对受影响高风险人工智能系统的全面评估,然后方可为该等系统颁发新的证书。评定机构终止符合性评定业务后,通报机构应撤回对该机构的指定。 4、如果通报机构有充足理由认为评定机构不再符合本法第三十一条规定的要求,或者未能履行其义务,通报机构应当以最大程度的勤勉立即对该情况展开调查。在此过程中,通报机构应当将所提出的异议告知相关评定机构,并给予其陈述意见的机会。如果通报机构认定评定机构不再符合本法第三十一条规定的要求或未能履行其义务,应当根据评定机构未能符合要求或未履行义务的严重程度,酌情限制、中止或撤回对该评定机构的指定。发生上述情况的,通报机构应当立即通知欧盟委员会和其他成员国。 5、通报机构对评定机构的指定被中止、限制或全部或部分撤回后,评定机构应当在十天内通知相关提供者。 6、评定机构的指定被限制、中止或撤回后,通报机构应当采取适当措施,确保保留评定机构相关的档案,并根据其他成员国的通报机构和市场监管机构的要求向其提供该等档案。 7、限制、中止或撤回对评定机构的指定时,通报机构应当: (a)评估对评定机构所颁发证书的影响; (b)在就指定变更发出通知后三个月内向欧盟委员会和其他成员国提交一份关于其调查结果的报告; (c)要求评定机构在通报机构确定的合理期限内暂停或撤销其不当颁发的证书,以确保市场上高风险人工智能系统的持续合规性; (d)向欧盟委员会和成员国通报其要求暂停或撤销的证书; (e)向提供者注册营业地所在成员国的国家主管机关提供与其已要求暂停或撤销的证书相关的所有信息;上述主管机关应当在必要时采取适当措施,避免对健康、安全或基本权利造成潜在风险。 8、在指定被暂停或限制的情况下,除不当签发的证书外,符合下列任一情形的证书仍然有效: (a)通报机构已经在采取暂停或限制措施后一个月内确认,受暂停或限制影响的证书不存在健康、安全或基本权利方面的风险,且通报机构已经列明补救该暂停或限制的行动时间安排;或 (b)通报机构已经确认,在暂停或限制期间,不会颁发、修订或重新颁发与暂停事项有关的证书,并说明评定机构是否有能力在暂停或限制期间继续监测既有已颁发证书并对其负责;如果通报机构确定评定机构没有能力支持已颁发的既有证书,证书所涵盖系统的提供者应当在指定被暂停或限制后三个月内,以书面形式向其注册营业地所在成员国的国家主管机关确认,另一家合格评定机构将临时承担原评定机构的职能,在暂停或限制期间监测证书并对其负责。 9、在指定被撤回的情况下,除不当签发的证书外,符合下列情形的证书在九个月内继续有效: (a)证书所涵盖的高风险人工智能系统的提供者的注册营业地所在成员国的国家主管机关确认,不存在与相关高风险人工智能系统相关的健康、安全或基本权利风险;及 (b)另一家评定机构已经书面确认,它将立即对该等人工智能系统负责,并在指定被撤回后12个月内完成评估。 在本款第一段所述情况下,证书所涵盖系统的提供者的营业地所在成员国的国家主管机关可以将证书的临时有效期每次再延长三个月,但合计不得超过12个月。 国家主管机关或承接受指定变更影响的评定机构职能的评定机构,应当立即将此情况通知欧盟委员会、其他成员国和其他评定机构。 |
Article 37 Challenge to the competence of notified bodies 1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and of its applicable responsibilities. 2. The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification or the maintenance of the competence of the notified body concerned. 3. The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially in accordance with Article 78. 4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements for its notification, it shall inform the notifying Member State accordingly and request it to take the necessary corrective measures, including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). |
第三十七条 对评定机构能力的质疑 1、如有理由怀疑评定机构的能力,或怀疑评定机构是否持续符合本法第三十一条规定的要求并履行其应承担的责任,欧盟委员会应当在必要时对所有此类情形展开调查。 2、通报机构应当应要求向欧盟委员会提供与相关评定机构的通报或能力维持有关的所有相关信息。 3、欧盟委员会应当确保在根据本条规定进行调查的过程中,按照本法第七十八条规定对所获得的所有敏感信息保密。 4、如果欧盟委员会认定评定机构不符合或不再符合通报的要求,应当相应地通知通报机构所属成员国,并要求其采取必要的纠正措施,包括在必要时暂停或撤回通报。如果上述成员国未采取必要的纠正措施,欧盟委员会可以通过制定实施细则直接暂停、限制或撤回对相应评定机构的指定。本款所称实施细则应当根据本法第九十八条第2款规定的审查程序经审查通过。 |
Article 38 Coordination of notified bodies 1. The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies. 2. Each notifying authority shall ensure that the bodies notified by it participate in the work of a group referred to in paragraph 1, directly or through designated representatives. 3. The Commission shall provide for the exchange of knowledge and best practices between notifying authorities. |
第三十八条 评定机构的协调 1、对于高风险人工智能系统,欧盟委员会应当确保按本法规定积极参与符合性评定程序的各评定机构之间适当地协调合作,并以评定机构行业小组的形式妥善运作。 2、每个通报机构都应确保其通报的机构直接或通过指定代表参与本条第1款所规定行业小组的工作。 3、欧盟委员会应当支持通报机构之间互相交流知识和最佳实践。 |
Article 39 Conformity assessment bodies of third countries Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet the requirements laid down in Article 31 or they ensure an equivalent level of compliance. |
第三十九条 第三国符合性评定机构 根据与欧盟缔结协定的第三国的法律设立的符合性评定机构,在符合本法第三十一条规定的要求或确保达到同等合规水平的前提下,可以经授权开展本法规定的评定机构有权开展的活动。 |
SECTION 5 Standards, conformity assessment, certificates, registration |
第五节 标准、符合性评定、证书和登记 |
Article 40 Harmonised standards and standardisation deliverables 1. High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations. 2. In accordance with Article 10 of Regulation (EU) No 1025/2012, the Commission shall issue, without undue delay, standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation requests covering obligations set out in Chapter V, Sections 2 and 3, of this Regulation. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum. When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that high-risk AI systems or general-purpose AI models placed on the market or put into service in the Union meet the relevant requirements or obligations laid down in this Regulation. 
The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU) No 1025/2012. 3. The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, to contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and to enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU) No 1025/2012. |
第四十条 统一标准和标准化成果 1、如果高风险人工智能系统或通用人工智能模型符合引用信息已按欧洲议会和欧盟理事会第1025/2012号条例规定在《欧洲联盟公报》上公布的统一标准或其任何部分,且上述标准涵盖本章第二节规定的要求或者本法第五章第二节和第三节规定的义务(如适用),该系统或模型应当被推定为符合本章第二节或者本法第五章第二节和第三节规定。 2、根据欧洲议会和欧盟理事会第1025/2012号条例第10条规定,欧盟委员会应当及时发出涵盖本章第二节项下全部要求的标准化请求,并发出涵盖本法第五章第二节和第三节项下义务的标准化请求(如适用)。标准化请求还应要求提供报告和文档流程方面的成果,以提高人工智能系统的资源性能。例如,在系统生命周期内减少高风险人工智能系统对能源和其他资源的消耗,以及通用人工智能模型的节能开发。在准备标准化请求时,欧盟委员会应当征询人工智能委员会和包括咨询论坛在内的相关利益相关者的意见。 在向欧洲标准化组织发出标准化请求时,欧盟委员会应当明确规定标准必须清晰、一致,包括与附录一中列明的现有欧盟统一立法所涵盖的产品所属各行业制定的标准一致,并旨在确保在欧盟被投放到市场或投入使用的高风险人工智能系统或通用人工智能模型符合本法规定的相关要求或义务。 欧盟委员会应当要求欧洲标准化组织根据欧洲议会和欧盟理事会第1025/2012号条例第24条规定提供证据,证明其已尽最大努力实现本款第一段和第二段中规定的目标。 3、标准化进程的参与者应当根据欧洲议会和欧盟理事会第1025/2012号条例第5条、第6条和第7条规定,努力推动人工智能的投资和创新,包括通过提高法律的确定性以及欧盟市场的竞争力和增速,推动全球标准化合作并考虑符合欧盟价值观、基本权利和利益的人工智能领域现有国际标准,并加强多方参与治理以确保各方利益的平衡代表和所有相关利益相关者的有效参与。 |
Article 41 Common specifications 1. The Commission may adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V where the following conditions have been fulfilled: (a) the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and: (i) the request has not been accepted by any of the European standardisation organisations; or (ii)the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012; or (iii) the relevant harmonised standards insufficiently address fundamental rights concerns; or (iv) the harmonised standards do not comply with the request; and (b) no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or, as applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period. When drafting the common specifications, the Commission shall consult the advisory forum referred to in Article 67. The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance with the examination procedure referred to in Article 98(2). 2. Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this Article to be fulfilled. 3. 
High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations. 4. Where a harmonised standard is adopted by a European standardisation organisation and proposed to the Commission for the publication of its reference in the Official Journal of the European Union, the Commission shall assess the harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to a harmonised standard is published in the Official Journal of the European Union, the Commission shall repeal the implementing acts referred to in paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter or, as applicable, the same obligations set out in Sections 2 and 3 of Chapter V. 5. Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto. 6. Where a Member State considers that a common specification does not entirely meet the requirements set out in Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it shall inform the Commission thereof with a detailed explanation. The Commission shall assess that information and, if appropriate, amend the implementing act establishing the common specification concerned. |
第四十一条 通用规范 1、满足下列条件后,欧盟委员会可以制定实施细则,就本章第二节规定的要求或第五章第二节和第三节规定的义务(如适用)建立通用规范: (a)欧盟委员会已经根据欧洲议会和欧盟理事会第1025/2012号条例第10条第(1)款规定,请求一个或多个欧洲标准化组织就本章第二节规定的要求或第五章第二节和第三节规定的义务(如适用)起草一份统一标准,且: (i)该请求未被任何欧洲标准化组织接受;或 (ii)处理该请求的统一标准未在欧洲议会和欧盟理事会第1025/2012号条例第10条第(1)款规定的截止日期内交付;或 (iii)相关统一标准不足以解决基本权利问题;或 (iv)统一标准不符合请求的要求;及 (b)根据欧洲议会和欧盟理事会第1025/2012号条例,《欧洲联盟公报》上没有公布涵盖本章第二节项下要求或第五章第二节和第三节项下义务(如适用)的统一标准的引用信息,且预计在合理期限内也不会公布此类引用信息。 在起草通用规范时,欧盟委员会应当征询本法第六十七条所规定的咨询论坛的意见。 本款第一段所述实施细则应当按照本法第九十八条第2款规定经审查程序通过。 2、在起草实施细则草案之前,欧盟委员会应当告知第1025/2012号条例第22条项下委员会,其认为本条第1款规定的条件已满足。 3、如果本条第1款所述通用规范或其任何部分涵盖了本章第二节规定的要求或本法第五章第二节和第三节所规定的义务(如适用),符合上述通用规范或其相应部分的高风险人工智能系统或通用人工智能模型应当被推定为符合本章第二节规定的要求或本法第五章第二节和第三节所规定的义务(如适用)。 4、如果欧洲标准化组织制定统一标准并建议欧盟委员会在《欧洲联盟公报》上公布其引用信息,欧盟委员会应当根据第1025/2012号条例规定对统一标准进行评估。统一标准的引用信息经《欧洲联盟公报》公布后,欧盟委员会应当废除本条第1款项下的实施细则或其中涵盖与本章第二节规定相同要求或本法第五章第二节和第三节规定相同义务的部分内容。 5、如果高风险人工智能系统或通用人工智能模型的提供者不遵守本条第1款项下的通用规范,其应当适当地证明其已经采用的技术解决方案满足本章第二节规定的要求或遵守本法第五章第二节和第三节规定的义务(如适用),且至少达到与通用规范相当的水平。 6、如果成员国认为某一通用规范不完全满足本章第二节规定的要求或本法第五章第二节和第三节规定的义务(如适用),应当通知欧盟委员会并作出详细解释。欧盟委员会应当对该信息进行评估,并在适当的情况下修改建立相关通用规范的实施细则。 |
Article 42 Presumption of conformity with certain requirements 1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant requirements laid down in Article 10(4). 2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. |
第四十二条 符合特定要求的推定 1、使用能够反映其预期使用的特定地理、行为、场景或功能环境的数据进行训练和测试的高风险人工智能系统,应当被推定为符合本法第十条第4款规定的相关要求。 2、已经根据第2019/881号条例规定的网络安全计划获得认证或符合性声明,且其引用信息已在《欧洲联盟公报》上公布的高风险人工智能系统,只要其网络安全证书或符合性声明或其任何部分涵盖本法第十五条规定的网络安全要求,即应当被推定为符合该等要求。 |
Article 43 Conformity assessment 1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: (a) the internal control referred to in Annex VI; or (b) the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where: (a)harmonised standards referred to in Article 40 do not exist, and common specifications referred to in Article 41 are not available; (b) the provider has not applied, or has applied only part of, the harmonised standard; (c) the common specifications referred to in point (a) exist, but the provider has not applied them; (d) one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the part of the standard that was restricted. For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body. 2. 
For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. 3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts. Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter. 4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. 
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. 5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annexes VI and VII by updating them in light of technical progress. 6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies. |
第四十三条 符合性评定 1、对于附录三第1条所列高风险人工智能系统,提供者适用本法第四十条规定的统一标准或本法第四十一条规定的通用规范(如适用)证明高风险人工智能系统符合本章第二节规定的要求时,应当选择以下任一符合性评定程序: (a)附录六规定的内部控制;或 (b)附录七规定的、有评定机构参与的质量管理体系评估和技术文档评估。 存在下列情形的,提供者在证明高风险人工智能系统符合本章第二节规定的要求时,应当遵守附录七规定的符合性评定程序: (a)不存在本法第四十条规定的统一标准,也不存在本法第四十一条规定的通用规范; (b)提供者未适用或仅适用了部分统一标准; (c)虽然存在本项(a)段中提到的通用规范,但提供者尚未适用; (d)本项(a)段所述一项或多项统一标准在公布时附有限制,且仅针对该标准受限制的部分。 为实现附录七所规定的符合性评定程序目的,提供者可以选择任何评定机构。但是,如果高风险人工智能系统拟由执法、移民或庇护机关或欧盟各机构投入使用,本法第七十四条第8款或第9款所规定的市场监管机构(如适用)应当作为评定机构。 2、对于附录三第2条至第8条所述高风险人工智能系统,提供者应当遵守附录六所述的以内部控制为基础的符合性评定程序,该程序未要求评定机构的参与。 3、对于附录一第A条所述欧盟统一立法涵盖的高风险人工智能系统,提供者应当遵守该等法案规定的相关符合性评定程序。本章第二节规定的要求应当适用于该等高风险人工智能系统,并应当作为评定的一部分。附录七第4.3条、第4.4条、第4.5条和第4.6条第五款也应当适用。 为评定目的,如果在上述法案规定的通报程序中已经对评定机构是否符合本法第三十一条第4款、第5款、第10款和第11款规定的要求进行了评估,根据上述法案通报的评定机构有权核查高风险人工智能系统是否符合本章第二节规定的要求。 如果附录一第A条所列法案允许产品制造商在已适用涵盖所有相关要求的全部统一标准的前提下选择不进行第三方符合性评定,该制造商只有在同时适用了涵盖本章第二节所规定全部要求的统一标准或本法第四十一条所规定的通用规范(如适用)后,才能行使该项选择权。 4、已完成符合性评定程序的高风险人工智能系统发生重大修改时,无论修改后的系统计划用于进一步分销还是继续由当前部署者使用,均应当重新履行符合性评定程序。 对于被投放到市场或投入使用后仍在学习的高风险人工智能系统,提供者在首次符合性评定时已经预先确定、且记载于本法附录四第2条(f)款项下技术文档所含信息中的对该高风险人工智能系统及其性能的修改,不构成重大修改。 5、为根据技术进步更新并修订本法附录六和附录七,欧盟委员会有权根据本法第九十七条规定制定规章条例。 6、欧盟委员会有权根据本法第九十七条规定制定规章条例以修改本条第1款和第2款,使本法附录三第2条至第8条所述高风险人工智能系统接受附录七或其任何部分规定的符合性评定程序。欧盟委员会在制定该等规章条例时,应当考虑以本法附录六所述内部控制为基础的符合性评定程序在预防或尽量减少该等系统对健康和安全造成的风险以及保护基本权利方面的有效性,以及评定机构是否有足够的能力和资源。 |
Article 44 Certificates 1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established. 2. Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. 3. Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available. |
第四十四条 证书 1、评定机构根据本法附录七颁发的证书应当以评定机构所在成员国有关当局易于理解的语言书就。 2、证书在其载明的有效期内有效,本法附录一所载人工智能系统的证书的有效期不得超过五年,附录三所载人工智能系统的证书的有效期不得超过四年。根据提供者的要求,在按照所适用的符合性评定程序重新评定后,可以延长证书的有效期,附录一所载人工智能系统每次可延长不超过五年、附录三所载人工智能系统每次可延长不超过四年。只要证书附页所补充的证书有效,该附页即持续有效。 3、如果评定机构发现人工智能系统不再符合本章第二节规定,应当按照比例原则暂停或撤销已颁发的证书或对其施加限制,但系统提供者在评定机构规定的合理期限内采取适当的纠正措施确保系统符合上述规定的除外。评定机构应当说明其作出决定的理由。 对于评定机构作出的决定(包括与已颁发的符合性证书有关的决定),应当设有申诉程序。 |
Article 45 Information obligations of notified bodies 1. Notified bodies shall inform the notifying authority of the following: (a)any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; (b)any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII; (c)any circumstances affecting the scope of or conditions for notification; (d)any request for information which they have received from market surveillance authorities regarding conformity assessment activities; (e)on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting. 2. Each notified body shall inform the other notified bodies of: (a)quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; (b)Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. 3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results. 4. Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. |
第四十五条 评定机构的信息义务 1、评定机构应当向通报机构通报以下事项: (a)根据本法附录七的要求颁发的所有欧盟技术文档评估证书、该等证书的附页以及质量管理体系批文; (b)拒绝、限制、暂停或撤销按本法附录七规定颁发的欧盟技术文档评估证书或质量管理体系批文的情况; (c)影响通报范围或条件的任何情况; (d)从市场监管机构收到的任何要求提供符合性评定活动相关信息的请求; (e)按要求通报其在通报范围内开展的符合性评定活动以及任何其他活动(包括跨境活动和分包)。 2、每个评定机构都应当将下列事项通知其他评定机构: (a)其拒绝、暂停或撤销的质量管理体系批文,以及按要求通知其已签发的质量体系批文; (b)其拒绝、撤销、暂停或以其他方式限制的欧盟技术文档评估证书或其附页,以及按要求通知其已颁发的证书和/或附页。 3、每个评定机构都应当向针对同类人工智能系统开展类似符合性评定活动的其他评定机构提供与否定性评定结果有关问题的相关信息,并应当按要求提供与肯定性评定结果有关问题的信息。 4、评定机构应当根据本法第七十八条规定,对其获悉的信息保密。 |
Article 46 Derogation from conformity assessment procedure 1. By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. 2. In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. 3. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. 4. 
Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified. 5. Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 3 to be unfounded, the Commission shall, without delay, enter into consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to present their views. Having regard thereto,the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant operators. 6. Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned. 7. For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall apply. |
第四十六条 符合性评定程序的克减 1、作为对本法第四十三条的克减,基于公共安全、保护人的生命和健康、保护环境或保护关键工业和基础设施资产等特殊原因,市场监管机构可以经正当合理的请求,授权在相关成员国境内将特定高风险人工智能系统投放市场或投入使用。考虑到证成上述克减的特殊原因,该授权应当仅在开展必要的符合性评定程序期间的有限期间内有效。上述程序应当尽快完成。 2、在基于公共安全特殊原因的有正当理由的紧急情况下,或者在自然人的生命或人身安全受到具体、实质性且紧迫的威胁时,执法机关或民防机构可以在未取得本条第1款所规定授权的情况下投入使用特定的高风险人工智能系统,但其应当在使用期间或使用后尽快申请上述授权。如果本条第1款项下授权申请被拒绝,应当立即停止使用该高风险人工智能系统,并立即废弃使用所产生的全部结果和输出。 3、仅当市场监管机构认定高风险人工智能系统符合本章第二节的要求时,才应当签发本条第1款所述的授权。市场监管机构应当将其根据本条第1款和第2款签发的任何授权通知欧盟委员会和其他成员国。该通知义务不涵盖与执法机关活动有关的敏感业务数据。 4、如果在收到本条第3款所述信息后的15个日历日内,成员国或欧盟委员会均未对某成员国市场监管机构根据本条第1款签发的授权提出异议,则该授权应被视为正当。 5、如果在收到本条第3款所述通知后的15个日历日内,某成员国对另一成员国市场监管机构签发的授权提出异议,或者欧盟委员会认为该授权违反欧盟法,或者认为成员国就本条第3款所述系统合规性作出的结论没有根据,欧盟委员会应当立即与相关成员国协商。相关运营者应当被征询意见,并有机会陈述其意见。在考虑上述意见后,欧盟委员会应当决定该授权是否正当,并将其决定告知相关成员国和相关运营者。 6、如果欧盟委员会认定授权不正当,相关成员国的市场监管机构应当撤销该授权。 7、对于与本法附录一第A节所列欧盟统一立法所涵盖产品有关的高风险人工智能系统,仅适用该等欧盟统一立法中规定的符合性评定克减。 |
Article 47 EU declaration of conformity 1. The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. 2. The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. 3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. 4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. 5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress. |
第四十七条 欧盟符合性声明 1、提供者应当为每个高风险人工智能系统起草一份机器可读、经物理或电子签名的欧盟符合性书面声明,并在该高风险人工智能系统被投放市场或投入使用后10年内留存,供国家主管机关调用。欧盟符合性声明应当明确其所针对的高风险人工智能系统。应当按要求向相关国家主管机关提交欧盟符合性声明副本。 2、欧盟符合性声明应当声明相关高风险人工智能系统符合本章第二节规定的要求。欧盟符合性声明应当包含本法附录五所列信息,并应翻译成该高风险人工智能系统投放市场或提供使用的成员国的国家主管机关易于理解的语言。 3、如果高风险人工智能系统还受其他同样要求出具欧盟符合性声明的欧盟统一立法约束,则应当就适用于该高风险人工智能系统的所有欧盟法律起草一份单一的欧盟符合性声明。该声明应当包含识别与其相关的欧盟统一立法所需的全部信息。 4、通过起草欧盟符合性声明,提供者即承担遵守本章第二节所规定要求的责任。提供者应当视情况更新其欧盟符合性声明。 5、为根据技术进步引入必要的新要素,欧盟委员会有权根据本法第九十七条制定授权法案,通过更新附录五所述欧盟符合性声明的内容来修订附录五。 |
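第四十七条第1款要求欧盟符合性声明以机器可读格式书就。以下Python示例仅为概念性草图,用JSON结构示意对一份机器可读声明做最低限度的字段完整性自查;其中字段名均为假设举例,实际必备内容以本法附录五为准:

```python
import json

# 假设性的必填字段清单(字段名仅为举例,实际以本法附录五为准)
REQUIRED_FIELDS = [
    "system_name",            # 高风险人工智能系统名称及可追溯标识
    "provider",               # 提供者名称和地址
    "declaration_statement",  # 符合本章第二节要求的声明
    "harmonised_standards",   # 所适用的统一标准或其他共同规范
    "notified_body_id",       # 评定机构识别号(如适用)
    "place_and_date",         # 签署地点和日期
]

def validate_declaration(doc_json: str) -> list:
    """返回机器可读声明中缺失的必填字段列表;空列表表示通过本示意性校验。"""
    doc = json.loads(doc_json)
    return [field for field in REQUIRED_FIELDS if field not in doc]

example = json.dumps({
    "system_name": "ExampleSystem v1.0",
    "provider": "Example Provider SA",
    "declaration_statement": "符合本法第三章第二节的要求",
    "harmonised_standards": ["(示例)相关欧洲统一标准编号"],
    "notified_body_id": "0000",
    "place_and_date": "Brussels, 2024-08-01",
})
print(validate_declaration(example))  # 输出 [] 表示无缺失字段
```

此类结构化自查可帮助提供者在按第1款留存声明之前核对字段完整性;声明的正式格式和内容应以附录五及相关实施规定为准。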
Article 48 CE marking 1. The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008. 2. For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means. 3. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. 4. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. 5. Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other law. |
第四十八条 CE标识 1、CE标识应当符合第765/2008号条例第30条规定的一般原则。 2、对于以数字化方式提供的高风险人工智能系统,只有在可以通过访问该系统的接口或通过易于获取的机器可读代码或其他电子方式轻松使用该系统时,才应当使用数字化CE标识。 3、对于高风险人工智能系统,CE标识应当明显、清晰、不可去除。如果因高风险人工智能系统的性质而无法达到或不能保证达到上述标准,则应当视情况将其加贴在系统的包装或随附文件上。 4、CE标识后面应当带有负责本法第四十三条项下符合性评定程序的评定机构的识别号(如适用)。评定机构的识别号应当由该机构自行或根据其指示由提供者或提供者的授权代表标注。如果在任何宣传材料中提到高风险人工智能系统符合CE标识的要求,均应同时注明(评定机构的)识别号。 5、如果高风险人工智能系统受其他欧盟法约束,且该等法律也要求加贴CE标识,CE标识应当注明该高风险人工智能系统也符合上述欧盟法的要求。 |
Article 49 Registration 1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71. 2. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71. 3. Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71. 4. For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in: (a) Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9; (b) Section B, points 1 to 5, and points 8 and 9 of Annex VIII; (c) Section C, points 1 to 3, of Annex VIII; (d) points 1, 2, 3 and 5, of Annex IX. Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective restricted sections of the EU database listed in the first subparagraph of this paragraph. 5. High-risk AI systems referred to in point 2 of Annex III shall be registered at national level. |
第四十九条 登记 1、在将附录三所列高风险人工智能系统(附录三第2条所述高风险人工智能系统除外)投放市场或投入使用之前,提供者或授权代表(如适用)应当在本法第七十一条所规定的欧盟数据库中完成其自身及其系统的登记。 2、在将人工智能系统投放市场或投入使用之前,如果提供者根据本法第六条第3款认定该系统不具有高风险,则该提供者或授权代表(如适用)应当在本法第七十一条所规定的欧盟数据库中完成其自身及该系统的登记。 3、在投入使用或使用附录三所列高风险人工智能系统(附录三第2条所列高风险人工智能系统除外)之前,作为公权力机关、欧盟机构、团体、办事处或机构的部署者或代表其行事的人员,应当在本法第七十一条所规定的欧盟数据库中完成自身登记、选定该系统并登记其使用情况。 4、对于在执法、移民、庇护和边境管制管理领域使用的本法附录三第1条、第6条和第7条所述高风险人工智能系统,本条第1款、第2款和第3款所述登记应当在第七十一条所规定欧盟数据库的安全非公开板块进行,并应当仅登记以下信息(如适用): (a)附录八第A条第1款至第10款(第6款、第8款和第9款除外); (b)附录八第B条第1款至第5款以及第8款和第9款; (c)附录八第C条第1款至第3款; (d)附录九第1款、第2款、第3款和第5款。 仅有欧盟委员会和本法第七十四条第8款所述国家主管机关有权访问本款第一段所列欧盟数据库的相应受限板块。 5、附录三第2条所述高风险人工智能系统应当在成员国国家层面登记。 |
CHAPTER IV TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS |
第四章特定人工智能系统的提供者和部署者的透明度义务 |
Article 50 Transparency obligations for providers and deployers of certain AI systems 1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence. 2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. 3. 
Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. 4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. 5. 
The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. 6. Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems. 7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission may adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56 (6). If it deems the code is not adequate, the Commission may adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2). |
第五十条 特定人工智能系统的提供者和部署者的透明度义务 1、提供者应当确保用于与自然人直接互动的人工智能系统,其设计和开发方式能使相关自然人知道自己正在与人工智能系统互动,但是,考虑到使用情况和场景,从一个相当知情、观察力敏锐且谨慎的自然人的角度看这一点显而易见的除外。该项义务不适用于经法律授权用于检测、预防、调查或起诉刑事犯罪并对第三方权利和自由采取适当保障措施的人工智能系统,但公众可利用该等系统举报刑事犯罪的除外。 2、生成合成音频、图像、视频或文本内容的人工智能系统(包括通用人工智能系统)的提供者,应当确保该等人工智能系统的输出以机器可读格式标记,并可被检测出是人工生成或操纵的。提供者应当确保其技术解决方案在技术可行的范围内有效、可互操作、稳健且可靠,同时考虑到各类内容的特殊性和局限性、实施成本和公认的最新技术水平(可能反映在相关技术标准中)。如果人工智能系统仅为标准编辑提供辅助功能,或者不会实质性改变部署者提供的输入数据或其语义,或者经法律授权用于检测、预防、调查或起诉刑事犯罪,则不适用该项义务。 3、情绪识别系统或生物特征分类系统的部署者应当向接触该系统的自然人告知该系统的运行情况,并应当根据欧盟第2016/679号和第2018/1725号条例以及第2016/680号指令(如适用)处理个人数据。该项义务不适用于经法律允许用于检测、预防或调查刑事犯罪,且对第三方权利和自由采取适当保障措施并符合欧盟法律的生物特征分类和情绪识别人工智能系统。 4、生成或操纵构成深度伪造的图像、音频或视频内容的人工智能系统的部署者,应当披露该等内容是人工生成或操纵的。如果经法律授权将该系统用于检测、预防、调查或起诉刑事犯罪,则不适用该项义务。如果上述内容构成明显具有艺术性、创造性、讽刺性、虚构性或类似性质的作品或节目的一部分,本款规定的透明度义务则仅限于以不妨碍作品展示或欣赏的恰当方式披露该等生成或操纵内容的存在。 生成或操纵以向公众告知公共利益事项为目的而发布的文本的人工智能系统的部署者,应当披露该等文本是人工生成或操纵的。如果经法律授权将该等系统用于检测、预防、调查或起诉刑事犯罪,或者该等人工智能生成内容已经过人工审查或编辑控制,且有自然人或法人对内容的发布承担编辑责任,则不适用该项义务。 5、本条第1款至第4款所述信息最迟应在首次互动或接触时以清晰可辨的方式提供给相关自然人。该等信息应符合所适用的无障碍要求。 6、本条第1款至第4款不影响本法第三章规定的要求和义务,也不影响欧盟或成员国法律为人工智能系统部署者规定的其他透明度义务。 7、人工智能办公室应当鼓励和推动在欧盟层面制定业务守则,以促进有效落实对人工生成或操纵内容的检测和标记义务。欧盟委员会可以根据本法第五十六条第6款规定的程序,制定实施法案批准上述业务守则。如果欧盟委员会认为上述业务守则不够充分,可以根据本法第九十八条第2款规定的审查程序制定实施法案,细化履行上述义务的共同规则。 |
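第五十条第2款要求将人工智能系统的输出以机器可读格式标记为人工生成或操纵。以下Python示例仅为概念性草图,以JSON旁路元数据示意"标记"与"检测"两个环节;其中字段名均为假设,实际方案应遵循相关技术标准(如内容来源与真实性方面的行业标准):

```python
import json

def mark_as_ai_generated(metadata: dict, generator: str) -> dict:
    """返回附加了机器可读"AI生成"标记的元数据副本(字段名为假设举例)。"""
    marked = dict(metadata)
    marked["ai_generated"] = True    # 机器可读标记:内容为人工生成或操纵
    marked["generator"] = generator  # 生成系统标识,便于追溯
    return marked

def is_ai_generated(metadata_json: str) -> bool:
    """检测元数据中是否存在"AI生成"标记。"""
    data = json.loads(metadata_json)
    return bool(data.get("ai_generated", False))

meta = mark_as_ai_generated({"title": "demo"}, "example-model")
print(is_ai_generated(json.dumps(meta)))     # True:可检测出人工生成标记
print(is_ai_generated('{"title": "demo"}'))  # False:无标记
```

需要说明的是,可随意剥离的旁路元数据远达不到第2款"有效、稳健"的要求;实务中通常还需结合水印、加密签名等技术,此处仅用于示意"机器可读且可检测"这一义务的基本含义。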
CHAPTER V GENERAL-PURPOSE AI MODELS |
第五章通用人工智能模型 |
SECTION 1 Classification rules |
第一节 分类规则 |
Article 51 Classification of general-purpose AI models as general-purpose AI models with systemic risk 1. A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: (a)it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII. 2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵. 3. The Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in paragraphs 1 and 2 of this Article, as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art. |
第五十一条 通用人工智能模型被归类为具有系统性风险的通用人工智能模型 1、符合以下任一条件的通用人工智能模型,应当被归类为具有系统性风险的通用人工智能模型: (a)经适当的技术工具和方法(包括指标和基准)评估,其具有高影响能力; (b)经欧盟委员会依职权作出决定或在科学小组发出合格预警后,按附录十三所列标准,其具有与本款(a)项所述相当的能力或影响。 2、用于训练通用人工智能模型的累计计算量(以浮点运算次数计)大于10²⁵时,应当推定该通用人工智能模型具有本条第1款(a)项所述的高影响能力。 3、欧盟委员会应当根据本法第九十七条制定授权法案,修改本条第1款和第2款所述阈值,并在必要时根据不断演变的技术发展(如算法改进或硬件效率提高)补充基准和指标,使上述阈值反映最新技术水平。 |
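第五十一条第2款以10²⁵次浮点运算作为推定"高影响能力"的累计训练计算量阈值。以下Python示例按业界常用的"6 × 参数量 × 训练token数"经验公式估算训练计算量,仅为量级示意,并非本法规定的计量方法:

```python
THRESHOLD_FLOPS = 1e25  # 本法第五十一条第2款规定的阈值:10^25 次浮点运算

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """按业界常用经验公式估算累计训练计算量:约 6 × 参数量 × 训练token数。"""
    return 6.0 * n_params * n_tokens

def presumed_high_impact(n_params: float, n_tokens: float) -> bool:
    """估算计算量是否超过阈值,从而可能按第2款被推定具有高影响能力。"""
    return estimated_training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

# 示例:700亿(7e10)参数模型训练20万亿(2e13)个token
flops = estimated_training_flops(7e10, 2e13)  # 约 8.4e24,低于 10^25 阈值
print(f"{flops:.1e}", presumed_high_impact(7e10, 2e13))
```

按此粗略估算,该示例模型尚未触及阈值;但实际认定应以训练全程实际累计的浮点运算量为准,且欧盟委员会可依第3款修改阈值。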
Article 52 Procedure 1. Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. 2. The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk. 3. Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk. 4. The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria set out in Annex XIII. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex XIII by specifying and updating the criteria set out in that Annex. 5. 
Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. 6. The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law. |
第五十二条 程序 1、如果通用人工智能模型符合本法第五十一条第1款(a)项所述条件,其提供者应当立即通知欧盟委员会,最迟不得晚于该条件被满足或得知该条件将被满足后两周。通知应当包括证明相关条件已被满足所需的信息。如果欧盟委员会虽未收到通知,但知悉某通用人工智能模型存在系统性风险,可以决定将该模型指定为具有系统性风险的模型。 2、符合本法第五十一条第1款(a)项所述条件的通用人工智能模型的提供者,可以在其通知中提出有充分依据的主张,证明该通用人工智能模型虽然符合上述条件,但因其特定特征,例外地不存在系统性风险,因此不应被归类为具有系统性风险的通用人工智能模型。 3、如果欧盟委员会认为提供者根据本条第2款提出的主张依据不充分,且相关提供者无法证明该通用人工智能模型因其特定特征而不存在系统性风险,欧盟委员会应当驳回其主张,该通用人工智能模型应当被视为具有系统性风险的通用人工智能模型。 4、欧盟委员会可以根据本法附录十三所列标准,依职权或在科学小组根据本法第九十条第1款(a)项发出合格预警后,指定某通用人工智能模型为存在系统性风险的通用人工智能模型。 欧盟委员会有权根据本法第九十七条制定授权法案,通过细化和更新附录十三所列标准来修订附录十三。 5、对于根据本条第4款被指定为具有系统性风险的通用人工智能模型,其提供者可以提出附理由的重新评估请求,欧盟委员会应当考虑该请求,并可以决定根据附录十三所列标准重新评估该通用人工智能模型是否仍可被视为存在系统性风险。该请求应当包含指定决定作出后出现的客观、详细的新理由。提供者最早可在指定决定作出六个月后申请重新评估。如果欧盟委员会经重新评估后决定维持该模型作为具有系统性风险的通用人工智能模型的指定,提供者最早可在该决定作出六个月后再次申请重新评估。 6、在不妨碍根据欧盟和成员国法律遵守和保护知识产权、机密商业信息或商业秘密之需要的前提下,欧盟委员会应当确保公布并持续更新一份具有系统性风险的通用人工智能模型清单。 |
SECTION 2 Obligations for providers of general-purpose AI models |
第二节 通用人工智能模型提供者的义务 |
Article 53 Obligations for providers of general-purpose AI models 1.Providers of general-purpose AI models shall: (a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and (ii)contain, at a minimum, the elements set out in Annex XII; (c) put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. 2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. 
This exception shall not apply to general-purpose AI models with systemic risks. 3. Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation. 4. Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. 5. For the purpose of facilitating compliance with Annex XI, in particular points 2 (d) and (e) thereof, the Commission is empowered to adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation. 6.The Commission is empowered to adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII in light of evolving technological developments. 7.Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78. |
第五十三条 通用人工智能模型提供者的义务 1、通用人工智能模型的提供者承担以下义务: (a)起草并持续更新模型的技术文档(包括模型训练和测试过程以及评估结果),其中至少应当包含本法附录十一所列信息,以便按要求向人工智能办公室和国家主管机关提供; (b)起草、持续更新信息和文档,并将其提供给计划将该通用人工智能模型集成到其人工智能系统中的人工智能系统提供者。在不妨碍根据欧盟和成员国法律遵守和保护知识产权、机密商业信息或商业秘密之需要的前提下,上述信息和文档应当: (i)使人工智能系统提供者能够充分理解该通用人工智能模型的能力和局限性,并履行其在本法项下的义务;并 (ii)至少包含本法附录十二所列要素; (c)制定一项遵守欧盟版权及相关权利法律的政策,特别是(包括通过最先进的技术)识别并遵守根据第2019/790号指令第4条第3款作出的权利保留; (d)根据人工智能办公室提供的模板,起草并公布一份足够详尽的、关于该通用人工智能模型训练所用内容的摘要。 2、依据允许访问、使用、修改和分发模型的自由开源许可证发布,且参数(包括权重)、模型架构信息和模型使用信息均已公开的人工智能模型,其提供者无需承担本条第1款(a)项和(b)项规定的义务。该例外不适用于具有系统性风险的通用人工智能模型。 3、通用人工智能模型的提供者应当在欧盟委员会和国家主管机关根据本法行使其职权时,视需要予以配合。 4、在统一标准公布之前,通用人工智能模型的提供者可以凭借本法第五十六条项下业务守则证明其已履行本条第1款规定的义务。提供者遵守涵盖上述义务的欧洲统一标准的,可以推定其符合上述规定。未遵守经批准的业务守则或不符合欧洲统一标准的通用人工智能模型提供者,应当证明其已采取其他适当的合规手段,由欧盟委员会评估。 5、为促进本法附录十一(尤其是其中第2条(d)项和(e)项)的落实,欧盟委员会有权根据本法第九十七条制定授权法案,详细说明计量和计算方法,以形成可比较、可核查的文档。 6、欧盟委员会有权根据本法第九十七条第2款制定授权法案,随技术发展演变修订本法附录十一和附录十二。 7、根据本条规定获得的任何信息或文档(包括商业秘密),均应按照本法第七十八条规定的保密义务处理。 |
Article 54 Authorised representatives of providers of general-purpose AI models 1. Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. 2. The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. 3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: (a)verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider; (b)keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and the contact details of the provider that appointed the authorised representative; (c)provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter; (d)cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union. 4. 
The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the AI Office or the competent authorities, on all issues related to ensuring compliance with this Regulation. 5. The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation.In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. 6. The obligation set out in this Article shall not apply to providers of general-purpose AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, unless the general-purpose AI models present systemic risks. |
第五十四条 通用人工智能模型提供者的授权代表 1、在将通用人工智能模型投放到欧盟市场之前,在第三国设立的提供者应当通过书面授权任命一名在欧盟设立的授权代表。 2、提供者应当确保其授权代表能够执行提供者所出具授权书中规定的任务。 3、授权代表应当执行提供者授权书中规定的任务,并应按要求以欧盟机构的一种官方语言向人工智能办公室提供一份授权书副本。就本法而言,提供者应当在授权书中委托授权代表执行以下任务: (a)核实本法附录十一规定的技术文档是否已经拟定,提供者是否已经履行本法第五十三条以及第五十五条(如适用)所规定的全部义务; (b)在通用人工智能模型投放市场后的10年内,保存一份本法附录十一规定的技术文档副本以及任命该授权代表的提供者的联系方式,供人工智能办公室和国家主管机关调用; (c)根据人工智能办公室附理由的要求,向其提供证明提供者已履行本章义务所需的全部信息和文档,包括本款(b)项所述信息和文档; (d)根据人工智能办公室和主管机关附理由的要求,配合其就该通用人工智能模型采取的任何行动,包括该模型被集成到在欧盟投放市场或投入使用的人工智能系统中的情形。 4、授权书应当授权该授权代表,在提供者之外或代替提供者,接受人工智能办公室或主管机关就确保遵守本法有关的所有问题的联络。 5、如果授权代表认为或有理由认为提供者的行为违反其在本法项下的义务,应当终止授权,并应立即将授权终止及其原因通知人工智能办公室。 6、依据允许访问、使用、修改和分发模型的自由开源许可证发布,且参数(包括权重)、模型架构信息和模型使用信息均已公开的通用人工智能模型,其提供者无需承担本条规定的义务,但该通用人工智能模型存在系统性风险的除外。 |
SECTION 3 Obligations of providers of general-purpose AI models with systemic risk |
第三节 具有系统性风险的通用人工智能模型的提供者的义务 |
Article 55 Obligations of providers of general-purpose AI models with systemic risk 1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: (a)perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; (b)assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; (c)keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; (d)ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. 2.Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. 3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78. |
第五十五条 具有系统性风险的通用人工智能模型提供者的义务 1、除本法第五十三条和第五十四条所规定义务外,具有系统性风险的通用人工智能模型的提供者还应当承担以下义务: (a)根据反映最新技术的标准化规程和工具进行模型评估,包括开展和记录模型的对抗性测试,以识别和降低系统性风险; (b)评估和降低欧盟层面可能存在的系统性风险(包括风险来源),它们可能源于具有系统性风险的通用人工智能模型的开发、上市或使用; (c)及时跟踪、记录并向人工智能办公室报告重大事件的相关信息和可能采取的纠正措施,并视情况向国家主管机关报告; (d)确保具有系统性风险的通用人工智能模型及其物理基础设施得到足够的网络安全保护。 2、在统一标准发布之前,具有系统性风险的通用人工智能模型的提供者可以凭借本法第五十六条项下业务守则证明其遵守了本条第1款规定的义务。遵守涵盖上述义务的欧洲统一标准的提供者可以被推定为履行了上述义务。具有系统性风险的通用人工智能模型的提供者,如果未遵守经批准的业务守则或不符合欧洲统一标准,则应当举证证明其已经采取其他适当的合规手段,由欧盟委员会评估。 3、根据本条规定获取的任何信息或文档(包括商业秘密),均应根据本法第七十八条规定作保密处理。 |
SECTION 4 Codes of practice |
第四节 业务守则 |
Article 56 Codes of practice 1. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches. 2. The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues: (a)the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of market and technological developments; (b)the adequate level of detail for the summary about the content used for training; (c)the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate; (d)the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain. 3. The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process. 4. The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level. 5. 
The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants. 6. The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice. The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). 7. The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code. 8. The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in light of emerging standards. The AI Office shall assist in the assessment of available standards. 9. Codes of practice shall be ready at the latest by 2 May 2025. The AI Office shall take the necessary steps, including inviting providers pursuant to paragraph 7. 10. 
If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2). |
第五十六条 业务守则 1、人工智能办公室应当考虑国际经验,鼓励和推动在欧盟层面制定业务守则,以促进本法的正确适用。 2、人工智能办公室和人工智能委员会应当致力于确保业务守则至少涵盖本法第五十三条和第五十五条规定的义务,包括以下事项: (a)确保本法第五十三条第1款(a)项和(b)项所述信息随市场和技术发展保持最新的方法; (b)训练内容摘要的适当详细程度; (c)识别欧盟层面的系统性风险的类型和性质,包括其来源(如适用); (d)欧盟层面的系统性风险评估和管理的措施、程序和方式(包括相关文档),该等措施、程序和方式应当与风险相匹配,应当考虑风险的严重性和发生可能性,并根据该等风险沿人工智能价值链可能出现和发生的方式,考虑应对该等风险的具体挑战。 3、人工智能办公室可以邀请通用人工智能模型的所有提供者和相关国家主管机关参与制定业务守则。民间社会组织、行业界、学术界和其他利益相关方(如下游提供者和独立专家)可以为上述过程提供支持。 4、人工智能办公室和人工智能委员会应当致力于确保业务守则明确规定其具体目标,并包含确保实现上述目标的承诺或措施(视情况包括关键绩效指标),并适当考虑欧盟层面所有利益相关方(包括受影响人群)的需求和利益。 5、人工智能办公室应当致力于确保业务守则的参与者定期向人工智能办公室报告承诺的执行情况、采取的措施及其结果(包括视情况根据关键绩效指标进行的衡量)。关键绩效指标和报告承诺应当反映不同参与者在规模和能力上的差异。 6、人工智能办公室和人工智能委员会应当定期监测和评估参与者实现业务守则目标的情况及其对本法正确适用的贡献。人工智能办公室和人工智能委员会应当评估业务守则是否涵盖本法第五十三条和第五十五条规定的义务,并应当定期监测和评估其目标的实现情况,还应当公布其对业务守则充分性的评估。 欧盟委员会可以通过制定实施细则批准业务守则,并赋予其在欧盟境内的普遍效力。该实施细则应当按照本法第九十八条第2款规定的审查程序经审查通过。 7、人工智能办公室可以邀请所有通用人工智能模型的提供者遵守业务守则。对于不具有系统性风险的通用人工智能模型的提供者,可以仅遵守本法第五十三条规定的义务,但其明确表示愿意遵守完整业务守则的除外。 8、人工智能办公室还应当视情况鼓励和促进对业务守则的审查和调整,特别是根据新出现的标准进行审查和调整。人工智能办公室应当协助评估可获得的标准。 9、业务守则最迟应当在2025年5月2日前制定完成。人工智能办公室应当采取必要措施(包括根据本条第7款规定邀请提供者)。 10、如果未能在2025年8月2日之前完成业务守则,或者人工智能办公室根据本条第6款进行评估后认为业务守则不充分,欧盟委员会可以通过制定实施细则,为履行本法第五十三条和第五十五条规定的义务提供共同规则(包括本条第2款规定的事项)。该实施细则应当按照本法第九十八条第2款规定的审查程序经审查通过。 |
CHAPTER VI MEASURES IN SUPPORT OF INNOVATION |
第六章 创新支持措施 |
Article 57 AI regulatory sandboxes 1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent authorities of other Member States. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes. The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that participation provides an equivalent level of national coverage for the participating Member States. 2. Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of other Member States may also be established. 3. The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union institutions, bodies, offices and agencies, and may exercise the roles and the tasks of national competent authorities in accordance with this Chapter. 4. Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities shall cooperate with other relevant authorities, and may allow for the involvement of other actors within the AI ecosystem. This Article shall not affect other regulatory sandboxes established under Union or national law. Member States shall ensure an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent authorities. 5. 
AI regulatory sandboxes established under paragraph 1 shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority. Such sandboxes may include testing in real world conditions supervised therein. 6. Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI regulatory sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union and national law supervised within the sandbox. 7. Competent authorities shall provide providers and prospective providers participating in the AI regulatory sandbox with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation. Upon request of the provider or prospective provider of the AI system, the competent authority shall provide a written proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation through the conformity assessment process or relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with a view to accelerating conformity assessment procedures to a reasonable extent. 8. 
Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national competent authority explicitly agree, the exit report may be made publicly available through the single information platform referred to in this Article. 9. The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives: (a) improving legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and national law; (b) supporting the sharing of best practices through cooperation with the authorities involved in the AI regulatory sandbox; (c) fostering innovation and competitiveness and facilitating the development of an AI ecosystem; (d) contributing to evidence-based regulatory learning; (e) facilitating and accelerating access to the Union market for AI systems, in particular when provided by SMEs, including start-ups. 10. National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national or competent authorities are associated with the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of their respective tasks and powers. 11.The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent authorities supervising the sandboxes, including at regional or local level. 
Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers within the limits of the relevant law, using their discretionary powers when implementing legal provisions in respect of a specific AI regulatory sandbox project, with the objective of supporting innovation in AI in the Union. 12. Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. 13. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities. 14. National competent authorities shall coordinate their activities and cooperate within the framework of the Board. 15. National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox, and may ask them for support and guidance. 
The AI Office shall make publicly available a list of planned and existing sandboxes and keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation. 16. National competent authorities shall submit annual reports to the AI Office and to the Board, from one year after the establishment of the AI regulatory sandbox and every year thereafter until its termination, and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and on the application of other Union law supervised by the competent authorities within the sandbox. The national competent authorities shall make those annual reports or abstracts thereof available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising its tasks under this Regulation. 17. The Commission shall develop a single and dedicated interface containing all relevant information related to AI regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries with competent authorities, and to seek non-binding guidance on the conformity of innovative products, services, business models embedding AI technologies, in accordance with Article 62(1), point (c). The Commission shall proactively coordinate with national competent authorities, where relevant. |
第五十七条 人工智能监管沙盒 1、各成员国应当确保其主管机关在国家层面建立至少一个人工智能监管沙盒,并在2026年8月2日前投入运行。该沙盒也可以与其他成员国的主管机关共同建立。欧盟委员会可以为人工智能监管沙盒的建立和运行提供技术支持、建议和工具。 如果加入既有沙盒能够为参与的成员国提供同等水平的国家覆盖,前项规定的义务也可以通过加入既有沙盒来履行。 2、成员国也可以在区域或地方层面另行建立人工智能监管沙盒,或与其他成员国主管机关联合建立。 3、欧洲数据保护监督员也可以为欧盟各机构、团体、办公室和机关建立人工智能监管沙盒,并可以根据本章规定行使国家主管机关的职能和任务。 4、各成员国应当确保本条第1款和第2款项下主管机关获得充足的资源,以有效和及时地履行本条义务。国家主管机关应当适时与其他相关机关合作,也可以允许人工智能生态系统中的其他行为者参与。本条不影响根据欧盟或成员国法律建立的其他监管沙盒。各成员国应当确保监督上述其他沙盒的机关与国家主管机关之间开展适当程度的合作。 5、根据本条第1款建立的人工智能监管沙盒应当提供一个促进创新的受控环境,便于在创新人工智能系统投放市场或投入使用之前,按照提供者或潜在提供者与主管机关商定的具体沙盒计划,在限定时间内对其进行开发、训练、测试和验证。此类沙盒可以包括在沙盒内受监督的真实场景测试。 6、主管机关应当视情况在人工智能监管沙盒内提供指导、监督和支持,以识别风险(特别是对基本权利、健康和安全的风险),并就测试、风险缓解措施及其相对于本法以及(如相关)沙盒内受监督的其他欧盟和成员国法律所规定义务和要求的有效性提供指导、监督和支持。 7、主管机关应当就监管预期以及如何履行本法规定的要求和义务,向参与人工智能监管沙盒的提供者和潜在提供者提供指导。 主管机关应当根据人工智能系统提供者或潜在提供者的要求,向其提供在沙盒中成功开展活动的书面证明。主管机关还应当提供一份出盒报告(exit report),详细说明在沙盒中开展的活动以及相关结果和学习成果。提供者可以在符合性评定程序或相关市场监管活动中使用该等文件证明其符合本法规定。对此,市场监管机构和评定机构应当积极考虑国家主管机关提供的出盒报告和书面证明,以期在合理范围内加快符合性评定程序。 8、根据本法第七十八条保密规定并经提供者或潜在提供者同意,欧盟委员会和人工智能委员会有权获取出盒报告,并在根据本法执行任务时酌情考虑该等报告内容。经提供者或潜在提供者和国家主管机关双方明确同意后,出盒报告可以通过本条项下的单一信息平台公开。 9、建立人工智能监管沙盒旨在推动实现以下目标: (a)提高法律的确定性,以实现对本法或其他适用的欧盟和成员国法律的合规; (b)通过与参与人工智能监管沙盒的机关合作,支持分享最佳实践; (c)鼓励创新和竞争力,促进人工智能生态系统的发展; (d)促进循证监管学习; (e)推动和加速人工智能系统进入欧盟市场,特别是由中小企业(包括初创企业)提供的人工智能系统。 10、国家主管机关应当确保,当创新人工智能系统涉及个人数据处理,或以其他方式属于提供或支持数据访问的其他国家机关或主管机关的监管范围时,国家数据保护机关和上述其他国家机关或主管机关参与人工智能监管沙盒的运作,并在各自的任务和权力范围内参与相关方面的监督。 11、人工智能监管沙盒不影响监督沙盒的主管机关(包括区域或地方层面的主管机关)行使监督或纠正权力。在开发和测试此类人工智能系统期间发现的任何对健康、安全和基本权利的重大风险都应当得到充分缓解。如果无法有效缓解该等风险,国家主管机关有权暂时或永久中止测试程序或沙盒参与,并应当将该决定通知人工智能办公室。国家主管机关应当在相关法律规定的范围内行使其监督权,在就特定人工智能监管沙盒项目执行法律规定时行使自由裁量权,以支持欧盟的人工智能创新。 12、根据适用的欧盟和成员国责任法,参与人工智能监管沙盒的提供者和潜在提供者应当就沙盒中进行的实验对第三方造成的任何损害承担责任。但是,只要潜在提供者遵守具体的沙盒计划和参与条款及条件,并真诚遵循国家主管机关的指导,主管机关即不得就其违反本法的行为处以行政罚款。如果负责其他欧盟和成员国法律的其他主管机关积极参与了对沙盒中人工智能系统的监督并提供了合规指导,则不得依据该等法律处以行政罚款。 13、人工智能监管沙盒的设计和实施应当确保在相关情况下推动国家主管机关之间的跨境合作。 14、国家主管机关应当在人工智能委员会框架内协调其活动并开展合作。 15、国家主管机关应当将沙盒的建立情况告知人工智能办公室和人工智能委员会,并可以请求其提供支持和指导。人工智能办公室应当公开并持续更新一份已计划和现有沙盒的清单,以鼓励人工智能监管沙盒中的更多互动和跨境合作。 
16、自人工智能监管沙盒建立满一年起直至其终止,国家主管机关应当每年向人工智能办公室和人工智能委员会提交年度报告,并在沙盒终止时提交最终报告。该等报告应当包含与该等沙盒的实施进展和结果有关的信息(包括最佳实践、事件、经验教训和关于其设置的建议),以及(如相关)关于本法(包括其授权法案和实施细则)的适用和可能修订的信息,以及沙盒内主管机关监督的其他欧盟法律的适用情况。国家主管机关应当在网上向公众公开上述年度报告或其摘要。欧盟委员会在根据本法行使职权时,应当酌情考虑上述年度报告。 17、欧盟委员会应当开发一个单一的专用界面,其中包含与人工智能监管沙盒有关的所有相关信息,以便利益相关者与人工智能监管沙盒互动、向主管机关提出问询,并根据本法第六十二条第1款(c)项规定,就嵌入人工智能技术的创新产品、服务、商业模式的符合性寻求不具有约束力的指导。欧盟委员会应当在相关情况下主动与国家主管机关协调。 |
Article 58 Detailed arrangements for, and functioning of, AI regulatory sandboxes 1.In order to avoid fragmentation across the Union, the Commission shall adopt implementing acts specifying the detailed arrangements for the establishment, development, implementation, operation and supervision of the AI regulatory sandboxes. The implementing acts shall include common principles on the following issues: (a) eligibility and selection criteria for participation in the AI regulatory sandbox; (b) procedures for the application, participation, monitoring, exiting from and termination of the AI regulatory sandbox, including the sandbox plan and the exit report; (c)the terms and conditions applicable to the participants. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2). 2. The implementing acts referred to in paragraph 1 shall ensure: (a)that AI regulatory sandboxes are open to any applying provider or prospective provider of an AI system who fulfils eligibility and selection criteria, which shall be transparent and fair, and that national competent authorities inform applicants of their decision within three months of the application; (b)that AI regulatory sandboxes allow broad and equal access and keep up with demand for participation; providers and prospective providers may also submit applications in partnerships with deployers and other relevant third parties; (c)that the detailed arrangements for, and conditions concerning AI regulatory sandboxes support, to the best extent possible, flexibility for national competent authorities to establish and operate their AI regulatory sandboxes; (d)that access to the AI regulatory sandboxes is free of charge for SMEs, including start-ups, without prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate manner; (e)that they facilitate providers and prospective providers, by means of the learning outcomes of the AI 
regulatory sandboxes, in complying with conformity assessment obligations under this Regulation and the voluntary application of the codes of conduct referred to in Article 95; (f) that AI regulatory sandboxes facilitate the involvement of other relevant actors within the AI ecosystem, such as notified bodies and standardisation organisations, SMEs, including start-ups, enterprises, innovators, testing and experimentation facilities, research and experimentation labs and European Digital Innovation Hubs, centres of excellence, individual researchers, in order to allow and facilitate cooperation with the public and private sectors; (g)that procedures, processes and administrative requirements for application, selection, participation and exiting the AI regulatory sandbox are simple, easily intelligible, and clearly communicated in order to facilitate the participation of SMEs, including start-ups, with limited legal and administrative capacities and are streamlined across the Union, in order to avoid fragmentation and that participation in an AI regulatory sandbox established by a Member State, or by the European Data Protection Supervisor is mutually and uniformly recognised and carries the same legal effects across the Union; (h)that participation in the AI regulatory sandbox is limited to a period that is appropriate to the complexity and scale of the project and that may be extended by the national competent authority; (i) that AI regulatory sandboxes facilitate the development of tools and infrastructure for testing, benchmarking, assessing and explaining dimensions of AI systems relevant for regulatory learning, such as accuracy, robustness and cybersecurity, as well as measures to mitigate risks to fundamental rights and society at large. 3. 
Prospective providers in the AI regulatory sandboxes, in particular SMEs and start-ups, shall be directed, where relevant, to pre-deployment services such as guidance on the implementation of this Regulation, to other value-adding services such as help with standardisation documents and certification, testing and experimentation facilities, European Digital Innovation Hubs and centres of excellence. 4. Where national competent authorities consider authorising testing in real world conditions supervised within the framework of an AI regulatory sandbox to be established under this Article, they shall specifically agree the terms and conditions of such testing and, in particular, the appropriate safeguards with the participants, with a view to protecting fundamental rights, health and safety. Where appropriate, they shall cooperate with other national competent authorities with a view to ensuring consistent practices across the Union. |
第五十八条 人工智能监管沙盒的具体安排和运作 1、为避免欧盟内部的碎片化,欧盟委员会应当制定实施细则,明确人工智能监管沙盒的建立、开发、实施、运行和监督的具体安排。实施细则应当包含涉及以下事项的共同原则: (a)参与人工智能监管沙盒的资格和筛选标准; (b)人工智能监管沙盒的申请、参与、监控、退出和终止程序,包括沙盒计划和出盒报告; (c)适用于参与者的条款和条件。 该实施细则应当按照本法第九十八条第2款规定的审查程序经审查通过。 2、本条第1款所述实施细则应当确保: (a)人工智能监管沙盒对任何符合资格和筛选标准并提出申请的人工智能系统提供者或潜在提供者开放,上述标准应当透明且公平,国家主管机关应当在收到申请后三个月内将决定通知申请人; (b)人工智能监管沙盒提供广泛和平等的参与机会,并能满足参与需求;提供者和潜在提供者还可以与部署者及其他相关第三方合作提交申请; (c)人工智能监管沙盒的具体安排和条件尽可能支持国家主管机关灵活地建立和运营其人工智能监管沙盒; (d)中小企业(包括初创企业)可以免费使用人工智能监管沙盒,但不影响国家主管机关以公平且成比例的方式收回特殊成本; (e)通过人工智能监管沙盒的学习成果,推动提供者和潜在提供者遵守本法规定的符合性评定义务,并自愿适用本法第九十五条项下行为准则; (f)人工智能监管沙盒促进人工智能生态系统中其他相关参与者的参与,如评定机构和标准化组织、中小企业(包括初创企业)、企业、创新者、测试和实验设施、研究和实验实验室、欧洲数字创新中心、卓越中心、个体研究人员,以支持和促进与公共和私营部门的合作; (g)申请、筛选、参与和退出人工智能监管沙盒的程序、流程和行政要求简单明了、易于理解且明确传达,以促进法律和行政能力有限的中小企业(包括初创企业)的参与,并在整个欧盟范围内保持精简以避免碎片化;参与成员国或欧洲数据保护监督员建立的人工智能监管沙盒在整个欧盟范围内获得相互和统一的认可,并具有相同的法律效力; (h)参与人工智能监管沙盒的期限与项目的复杂程度和规模相适应,且国家主管机关可以延长该期限; (i)人工智能监管沙盒有助于开发用于测试、基准评测、评估和解释与监管学习相关的人工智能系统维度(如准确性、稳健性和网络安全)的工具和基础设施,以及缓解对基本权利和整个社会的风险的措施。 3、应当在相关情况下引导人工智能监管沙盒中的潜在提供者(特别是中小企业和初创企业)获得部署前服务(如本法实施的指导)和其他增值服务(如标准化文件和认证方面的帮助、测试和实验设施、欧洲数字创新中心和卓越中心)。 4、如果国家主管机关考虑批准在根据本条建立的人工智能监管沙盒框架内受监督地开展真实场景测试,其应当与参与者具体商定该等测试的条款和条件,特别是适当的保障措施,以保护基本权利、健康和安全。在适当情况下,其还应当与其他国家主管机关合作,以确保整个欧盟范围内做法的一致。 |
Article 59 Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox 1. In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: (a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: (i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems; (ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures; (iii) energy sustainability; (iv) safety and resilience of transport systems and mobility, critical infrastructure and networks; (v) efficiency and quality of public administration and public services; (b) the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data; (c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing; (d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those 
data; (e) providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox; (f) any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data; (g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period; (h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law; (i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV; (j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities. 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1. 3. 
Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data. |
第五十九条 在人工智能监管沙盒中为公共利益进一步处理个人数据以开发特定人工智能系统 1、在人工智能监管沙盒中,只有在下列条件全部满足时,方可处理为其他目的合法收集的个人数据,且仅限用于在沙盒中开发、训练和测试特定人工智能系统: (a)人工智能系统由公权力机关或其他自然人或法人为维护重大公共利益而开发,且属于以下一个或多个领域: (i)公共安全和公共卫生,包括疾病检测、诊断、预防、控制和治疗以及改善医疗卫生系统; (ii)高水平保护和改善环境质量、保护生物多样性、防治污染、绿色转型措施、减缓和适应气候变化措施; (iii)能源可持续性; (iv)运输系统和交通、关键基础设施和网络的安全性和韧性; (v)公共管理和公共服务的效率和质量; (b)所处理的数据对于遵守本法第三章第二节规定的一项或多项要求属于必要,且该等要求无法通过处理匿名化数据、合成数据或其他非个人数据来有效满足; (c)具备有效的监控机制,以识别在沙盒实验期间是否可能出现第2016/679号条例(欧盟)第35条和第2018/1725号条例(欧盟)第39条所述数据主体权利和自由面临的任何高风险,并具备响应机制,以及时缓解该等风险并在必要时停止处理; (d)在沙盒环境中处理的任何个人数据均处于潜在提供者控制下的一个功能独立、被隔离且受保护的数据处理环境中,只有经授权的人员才能访问该等数据; (e)提供者只能依照欧盟数据保护法进一步共享最初收集的数据;在沙盒中生成的任何个人数据均不得在沙盒外共享; (f)在沙盒环境中处理个人数据既不会导致影响数据主体的措施或决定,也不会影响其行使欧盟个人数据保护法规定的权利; (g)在沙盒环境中处理的个人数据均受到适当技术和组织措施的保护,并在沙盒参与终止或个人数据保留期届满后被删除; (h)除欧盟或成员国法律另有规定外,在沙盒环境中处理个人数据的日志在参与沙盒期间予以保存; (i)人工智能系统训练、测试和验证背后的过程和理由的完整详细描述与测试结果一起,作为本法附录四所述技术文档的一部分予以保存; (j)在主管机关网站上发布在沙盒中开发的人工智能项目及其目标和预期结果的简短摘要;该义务不涵盖与执法、边境管制、移民或庇护机关活动有关的敏感业务数据。 2、为了在执法机关的控制和负责之下预防、调查、侦查或起诉刑事犯罪或执行刑事处罚(包括防范和预防对公共安全的威胁),在人工智能监管沙盒中处理个人数据应当以具体的欧盟或成员国法律为依据,并受本条第1款所述各项条件的共同约束。 3、本条第1款不影响下列法律的适用:排除为该法律明确规定以外的其他目的处理个人数据的欧盟或成员国法律,以及依照欧盟个人数据保护法,为开发、测试或训练创新人工智能系统之目的所必需的个人数据处理规定处理依据或任何其他法律依据的欧盟或成员国法律。 |
Article 60 Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes 1. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5. The Commission shall, by means of implementing acts, specify the detailed elements of the real-world testing plan. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2). This paragraph shall be without prejudice to Union or national law on the testing in real world conditions of high-risk AI systems related to products covered by Union harmonisation legislation listed in Annex I. 2. Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers. 3. The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to any ethical review that is required by Union or national law. 4. 
Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: (a)the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance authority in the Member State where the testing in real world conditions is to be conducted; (b)the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real-world testing plan; where the market surveillance authority has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be understood to have been approved; where national law does not provide for a tacit approval, the testing in real world conditions shall remain subject to an authorisation; (c)the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide unique single identification number and with the information specified therein; the provider or prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world conditions in accordance with Article 49(5); 
(d)the provider or prospective provider conducting the testing in real world conditions is established in the Union or has appointed a legal representative who is established in the Union; (e)data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries only provided that appropriate and applicable safeguards under Union law are implemented; (f) the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than six months, which may be extended for an additional period of six months, subject to prior notification by the provider or prospective provider to the market surveillance authority, accompanied by an explanation of the need for such an extension; (g)the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to their age or disability, are appropriately protected; (h)where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and under other applicable Union and national law; (i) the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects, and 
their personal data shall be deleted after the test is performed; (j) the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks; (k)the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded. 5. Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. 6. In accordance with Article 75, Member States shall confer on their market surveillance authorities the powers of requiring providers and prospective providers to provide information, of carrying out unannounced remote or on-site inspections, and of performing checks on the conduct of the testing in real world conditions and the related high-risk AI systems. Market surveillance authorities shall use those powers to ensure the safe development of testing in real world conditions. 7. Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. 8. 
Providers or prospective providers shall notify the national market surveillance authority in the Member State where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and of the final outcomes. 9. The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions. |
第六十条 在人工智能监管沙盒之外的真实场景条件下测试高风险人工智能系统 1、本法附录三所列高风险人工智能系统的提供者或潜在提供者可以根据本条和本条所述真实场景测试计划,在不违反本法第五条禁令的情况下,在人工智能监管沙盒之外的真实场景条件下测试高风险人工智能系统。 欧盟委员会应当通过制定实施细则,明确真实场景测试计划的详细要素。该等实施细则应当按照第九十八条第2款规定的审查程序经审查通过。 本款不得违反欧盟或成员国关于在真实场景条件下测试与本法附录一所列欧盟统一立法所涵盖产品相关的高风险人工智能系统的法律。 2、提供者或潜在提供者可以在人工智能系统被投放到市场或投入使用之前的任何时候,自行或与一个或多个部署者或潜在部署者合作,在真实场景条件下对本法附录三所述高风险人工智能系统进行测试。 3、根据本条规定在真实场景条件下测试高风险人工智能系统不得影响欧盟或成员国法律要求的任何伦理审查。 4、在以下条件均满足的情况下,提供者或潜在提供者方可在真实场景条件下进行(高风险人工智能系统)测试: (a)提供者或潜在提供者已经制定一份真实场景测试计划,并将其提交给在真实场景条件下进行测试的成员国的市场监管机构; (b)在真实场景条件下进行测试的成员国的市场监管机构已经批准在真实场景条件下进行的测试和真实场景测试计划;如果市场监管机构在(收到计划后)30天内未作出答复,应理解为在真实场景条件下的测试和真实场景测试计划已获批准;如果成员国法律并未规定默示批准,在真实场景条件下的测试仍须获得批准; (c)除本法附录三第1条、第6条和第7条所述执法、移民、庇护和边境管制管理领域的高风险人工智能系统的提供者或潜在提供者,以及附录三第2条所述的高风险人工智能系统外,提供者或潜在提供者已经根据本法第七十一条第4款规定在真实场景条件下使用全欧盟唯一的单一识别号和附录九项下信息完成测试登记;附录三第1条、第6条和第7条所述执法、移民、庇护和边境管制管理领域的高风险人工智能系统的提供者或潜在提供者已经根据本法第四十九条第4款(d)项规定在欧盟数据库的安全非公开版块登记真实场景条件下的测试,并附有全欧盟唯一的单一识别号和其中规定的信息;附录三第2条项下高风险人工智能系统的提供者或潜在提供者已经根据本法第四十九条第5款规定完成在真实场景条件下测试的注册; (d)在真实场景条件下进行测试的提供者或潜在提供者在欧盟成立,或者已经任命了一名在欧盟设立的法定代理人; (e)为在真实场景条件下进行测试而收集和处理的数据,只有在已经采取欧盟法律规定的恰当且适用的保障措施的情况下,才能被转移到第三国; (f)在真实场景条件下测试的持续时间不得超过实现其目标所需的时间,最长不超过六个月,提供者或潜在提供者事先通知市场监管机构并就延期必要性作出解释的情况下,可以再延长六个月; (g)在真实场景条件下进行测试的受试者如果因年龄或残障而属于弱势群体,应当对其采取适当的保护措施; (h)如果提供者或潜在提供者与部署者或潜在部署者合作,在真实场景条件下组织测试,部署者或潜在部署者应当已经获悉与其参与决定有关的测试的所有信息,并已经取得本法第十三条项下的人工智能系统使用说明;提供者或潜在提供者和部署者或潜在部署者应当签订一份协议,明确其各自的角色和责任,以确保遵守本法和其他适用的欧盟和成员国法律中关于在真实场景条件下进行测试的规定; (i)在真实场景条件下进行测试的受试者已经根据本法第六十一条规定出具知情同意书;或者在寻求受试者的知情同意会阻碍人工智能系统测试的执法活动中,测试本身和在真实场景条件下测试的结果不得对受试者产生任何负面影响,且测试结束后应当删除其个人数据; (j)真实场景条件下的测试由提供者或潜在提供者和部署者或潜在部署者通过在相关领域具有相应资格并具备完成任务所需相应能力、培训经验和权力的人进行有效的监督; (k)人工智能系统的预测、建议或决策可以被有效地撤销和忽略。 5、在真实场景条件下进行测试的任何受试者,或其依法指定的代表(视情况而定),在不会因此受到任何损害的情况下,可以无理由随时撤销其知情同意书并退出测试,并有权要求立即永久删除其个人数据。撤销知情同意书不影响已经开展的活动。 6、根据本法第七十五条规定,各成员国应当授权其市场监管机构要求提供者和潜在提供者提供信息、进行突击远程或现场检查,并对在真实场景条件下进行的测试和相关高风险人工智能系统进行检查。市场监管机构应当利用上述权力确保在真实场景条件下开展的测试安全地进行。 
7、在真实场景条件下进行测试的过程中发现任何重大事件的,应当按照本法第七十三条规定向国家市场监管机构报告。提供者或潜在提供者应当立即采取缓解措施,否则应当暂停此次真实场景条件下的测试直至采取缓解措施为止,或以其他方式终止测试。提供者或潜在提供者应当建立一套程序,以便在真实场景条件下的测试因此终止后迅速召回人工智能系统。 8、提供者或潜在提供者应当将真实场景条件下测试的暂停或终止及其最终结果,通知在真实场景条件下进行测试的成员国的国家市场监管机构。 9、提供者或潜在提供者应当根据适用的欧盟和成员国责任法规定,对其在真实场景条件下进行测试的过程中造成的全部损害承担责任。 |
Article 61 Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes 1. For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: (a)the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation; (b)the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects’ participation; (c)their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification; (d)the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system; (e)the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained. 2. The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. |
第六十一条 在人工智能监管沙盒之外的真实场景条件下参与测试的知情同意 1、根据本法第六十条规定在真实场景条件下进行测试,应当在受试者参与此类测试之前,在其被充分告知以下简洁、清晰、相关且可理解的信息后,取得受试者自愿作出的知情同意: (a)在真实场景条件下进行测试的性质和目标,以及可能与其参与有关的不便; (b)在真实场景条件下进行测试的条件,包括一名或多名受试者参与的预期持续时间; (c)受试者的权利及其参与的保障,特别是其在不会因此受到任何损害、无需提供任何理由的情况下随时拒绝参与和退出在真实场景条件下进行的测试的权利; (d)请求撤销或无视人工智能系统的预测、建议或决定的安排; (e)根据本法第六十条第4款(c)项规定,在真实场景条件下进行测试的全欧盟唯一的单一标识号,以及可以提供更多信息的提供者或其法定代理人的联系方式。 2、知情同意书应当注明日期并记录在案,并应当向受试者或其法定代理人提供一份副本。 |
Article 62 Measures for providers and deployers, in particular SMEs, including start-ups 1. Member States shall undertake the following actions: (a)provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria; the priority access shall not preclude other SMEs, including start-ups, other than those referred to in this paragraph from access to the AI regulatory sandbox, provided that they also fulfil the eligibility conditions and selection criteria; (b)organise specific awareness raising and training activities on the application of this Regulation tailored to the needs of SMEs including start-ups, deployers and, as appropriate, local public authorities; (c)utilise existing dedicated channels and where appropriate, establish new ones for communication with SMEs including start-ups, deployers, other innovators and, as appropriate, local public authorities to provide advice and respond to queries about the implementation of this Regulation, including as regards participation in AI regulatory sandboxes; (d)facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process. 2. The specific interests and needs of the SME providers, including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size, market size and other relevant indicators. 3. 
The AI Office shall undertake the following actions: (a)provide standardised templates for areas covered by this Regulation, as specified by the Board in its request; (b)develop and maintain a single information platform providing easy to use information in relation to this Regulation for all operators across the Union; (c)organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation; (d)evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems. |
第六十二条 针对提供者和部署者(特别是初创企业等中小企业)的措施 1、各成员国应当采取以下措施: (a)对于在欧盟拥有注册办事处或分支机构的中小企业(包括初创企业),当其符合(监管沙盒的)资格条件和筛选标准时,为其提供优先参与人工智能监管沙盒的权限;优先准入不应当排除本款规定范围外但符合资格条件和筛选标准的其他中小企业(包括初创企业)参与人工智能监管沙盒; (b)根据中小企业(包括初创企业)、部署者以及(适当情况下)地方公权力机关的需要,组织关于本法适用的专门认识提升和培训活动; (c)利用现有的专用渠道,并在适当的情况下建立新渠道,与中小企业(包括初创企业)、部署者、其他创新者以及(适当情况下)地方公权力机关进行沟通,提供建议并回应有关实施本法的疑问,包括参与人工智能监管沙盒的疑问; (d)促进中小企业和其他相关利益相关方参与标准制定过程。 2、在根据本法第四十三条规定确定符合性评定费用时,应当考虑中小企业(包括初创企业)提供者的具体利益和需求,并根据其企业规模、市场规模和其他相关指标适当降低评定费用。 3、人工智能办公室应当采取以下措施: (a)按人工智能委员会在其请求中明确的要求,为本法规定范围内的领域提供标准化模板; (b)开发和维护一个单一信息平台,为欧盟所有运营者提供与本法相关的易于使用的信息; (c)组织适当的宣传活动,提高人们对本法所规定义务的认识; (d)评估并促进人工智能系统相关公共采购程序最佳实践的趋同。 |
Article 63 Derogations for specific operators 1. Microenterprises within the meaning of Recommendation 2003/361/EC may comply with certain elements of the quality management system required by Article 17 of this Regulation in a simplified manner, provided that they do not have partner enterprises or linked enterprises within the meaning of that Recommendation. For that purpose, the Commission shall develop guidelines on the elements of the quality management system which may be complied with in a simplified manner considering the needs of microenterprises, without affecting the level of protection or the need for compliance with the requirements in respect of high-risk AI systems. 2. Paragraph 1 of this Article shall not be interpreted as exempting those operators from fulfilling any other requirements or obligations laid down in this Regulation, including those established in Articles 9, 10, 11, 12, 13, 14, 15, 72 and 73. |
第六十三条 对特定运营者的豁免 1、第2003/361/EC号建议项下微型企业不存在该建议所指的伙伴企业或关联企业的,可以以简化方式遵守本法第十七条所要求的质量管理体系的特定要素。为此,欧盟委员会应当考虑微型企业的需求,就可以以简化方式遵守的质量管理体系要素制定指导方针,但不得影响保护水平或遵守高风险人工智能系统相关要求的必要性。 2、本条第1款不应被解释为免除该等运营者履行本法规定的任何其他要求或义务,包括第九条、第十条、第十一条、第十二条、第十三条、第十四条、第十五条、第七十二条和第七十三条规定的要求或义务。 |
CHAPTER VII GOVERNANCE |
第七章 治理 |
SECTION 1 Governance at Union level |
第一节 欧盟层面的治理 |
Article 64 AI Office 1. The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office. 2. Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation. |
第六十四条 人工智能办公室 1、欧盟委员会应当通过人工智能办公室培养欧盟在人工智能领域的专业知识和能力。 2、各成员国应当为本法反映的人工智能办公室任务的执行提供便利。 |
Article 65 Establishment and structure of the European Artificial Intelligence Board 1.A European Artificial Intelligence Board (the ‘Board’) is hereby established. 2. The Board shall be composed of one representative per Member State. The European Data Protection Supervisor shall participate as observer. The AI Office shall also attend the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case by case basis, where the issues discussed are of relevance for them. 3. Each representative shall be designated by their Member State for a period of three years, renewable once. 4. Member States shall ensure that their representatives on the Board: (a)have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the Board’s tasks referred to in Article 66; (b)are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ needs, as a single contact point for stakeholders; (c) are empowered to facilitate consistency and coordination between national competent authorities in their Member State as regards the implementation of this Regulation, including through the collection of relevant data and information for the purpose of fulfilling their tasks on the Board. 5. The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the duration of the mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and the organisation of the Board’s activities and those of its sub-groups. 6. 
The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities about issues related to market surveillance and notified bodies respectively. The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be invited to such sub-groups or to specific meetings of those subgroups as observers. 7. The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities. 8. The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide the secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure. |
第六十五条 欧洲人工智能委员会的设立和结构 1、特此设立欧洲人工智能委员会(简称“人工智能委员会”)。 2、人工智能委员会应当由每个成员国委派一名代表共同组成。欧洲数据保护监督员应当作为观察员参加人工智能委员会。人工智能办公室也应当出席人工智能委员会会议,但不参与投票。人工智能委员会会议所议议题与其他国家和欧盟公权力机关、机构或专家有关时,可以视具体情况邀请该等主体参加相应会议。 3、人工智能委员会的每位代表都由其成员国指定,任期三年,可连任一次。 4、各成员国应当确保其指定的人工智能委员会代表符合以下条件: (a)在其成员国拥有相应权限和权力,能够积极参与实现本法第六十六条项下人工智能委员会职能; (b)被指定为(成员国)与人工智能委员会的单一联络站,并在适当情况下考虑成员国的需求,作为利益相关方的单一联络站; (c)有权推动其成员国国家主管机关在适用本法方面保持一致性和协调性,包括通过收集相关数据和信息,履行其在人工智能委员会的职责。 5、成员国的指定代表应当以三分之二多数通过人工智能委员会的议事规则。议事规则应当特别规定人工智能委员会主席的遴选程序、任期和职责、投票的详细安排以及人工智能委员会及其小组活动的组织。 6、人工智能委员会应当设立两个常设小组,为市场监管机构和通报机构分别就市场监管和评定机构相关问题进行合作与交流提供平台。 就欧盟第2019/1020号条例第30条而言,市场监管常设小组应当作为本法项下的行政合作小组(ADCO)。 人工智能委员会可酌情设立其他常设或临时小组,以审查具体问题。在适当的情况下可邀请本法第六十七条项下咨询论坛的代表作为观察员加入上述小组或小组具体会议。 7、人工智能委员会的组织和运作应当确保其活动客观且公正。 8、人工智能委员会应当由成员国代表之一担任主席。人工智能办公室应当为人工智能委员会配备秘书处,根据主席的要求召开会议,并根据本法规定的人工智能委员会职责及其议事规则准备会议议程。 |
Article 66 Tasks of the Board The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of this Regulation. To that end, the Board may in particular: (a)contribute to the coordination among national competent authorities responsible for the application of this Regulation and, in cooperation with and subject to the agreement of the market surveillance authorities concerned, support joint activities of market surveillance authorities referred to in Article 74(11); (b)collect and share technical and regulatory expertise and best practices among Member States; (c)provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules on general-purpose AI models; (d)contribute to the harmonisation of administrative practices in the Member States, including in relation to the derogation from the conformity assessment procedures referred to in Article 46, the functioning of AI regulatory sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60; (e)at the request of the Commission or on its own initiative, issue recommendations and written opinions on any relevant matters related to the implementation of this Regulation and to its consistent and effective application, including: (i) on the development and application of codes of conduct and codes of practice pursuant to this Regulation, as well as of the Commission’s guidelines; (ii) the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious incident reports referred to in Article 73, and the functioning of the EU database referred to in Article 71, the preparation of the delegated or implementing acts, and as regards possible alignments of this Regulation with the Union harmonisation legislation listed in Annex I; (iii) on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2; (iv) on 
the use of harmonised standards or common specifications referred to in Articles 40 and 41; (v) trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the development of digital skills; (vi) trends on the evolving typology of AI value chains, in particular on the resulting implications in terms of accountability; (vii)on the potential need for amendment to Annex III in accordance with Article 7, and on the potential need for possible revision of Article 5 pursuant to Article 112, taking into account relevant available evidence and the latest developments in technology; (f) support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems; (g)facilitate the development of common criteria and a shared understanding among market operators and competent authorities of the relevant concepts provided for in this Regulation, including by contributing to the development of benchmarks; (h)cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection; (i) contribute to effective cooperation with the competent authorities of third countries and with international organisations; (j) assist national competent authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by contributing to the assessment of training needs for staff of Member States involved in implementing this Regulation; (k)assist the AI Office in supporting national competent authorities in the establishment and development of AI regulatory sandboxes, and facilitate cooperation and information-sharing among AI 
regulatory sandboxes; (l) contribute to, and provide relevant advice on, the development of guidance documents; (m) advise the Commission in relation to international matters on AI; (n)provide opinions to the Commission on the qualified alerts regarding general-purpose AI models; (o)receive opinions by the Member States on qualified alerts regarding general-purpose AI models, and on national experiences and practices on the monitoring and enforcement of AI systems, in particular systems integrating the general-purpose AI models. |
第六十六条 人工智能委员会的职责 人工智能委员会应当向欧盟委员会和各成员国提供建议和协助,以推动本法的一致和有效适用。为此,人工智能委员会特别可以采取以下措施: (a)促进负责实施本法的国家主管机关之间的协调,并与有关市场监管机构合作并在取得其同意后,支持本法第七十四条第11款项下的市场监管机构联合行动; (b)在成员国之间收集和分享技术和监管专业知识和最佳实践; (c)就本法的实施,特别是通用人工智能模型相关规则的落实提供建议; (d)协调成员国内部的行政管理活动,包括关于本法第四十六条所规定的符合性评定程序的豁免、人工智能监管沙盒的运作以及第五十七条、第五十九条和第六十条所规定真实场景条件下的测试; (e)根据欧盟委员会的要求或主动发布与本法的实施及其一致和有效适用有关的事项的建议和书面意见,包括: (i)根据本法和欧盟委员会的指导方针制定和实施行为准则和行业守则; (ii)根据本法第一百一十二条规定对本法进行评估和审查,(评估和审查范围)包括本法第七十三条所规定的重大事件报告、第七十一条所规定的欧盟数据库的运作、授权法案或实施细则的制定,以及本法与附录一所列欧盟统一立法之间可能的一致性; (iii)关于第三章第二节所要求的技术规范或现有标准; (iv)第四十条和第四十一条提及的统一标准或通用规范的使用; (v)趋势,如欧洲在人工智能方面的全球竞争力、欧盟对人工智能的应用以及数字技能的发展情况; (vi)人工智能价值链类型演变的趋势,尤其是对问责制的影响; (vii)考虑现有的相关证据和技术的最新发展情况,根据本法第七条规定修订附录三的潜在必要性,以及根据第一百一十二条规定对第五条进行可能的修订的潜在需求; (f)支持欧盟委员会提高人工智能素养,提高公众对使用人工智能系统的益处、风险、保障措施以及权利和义务的认识和理解; (g)推动市场运营者和主管机关制定共同标准,并就本法规定的相关概念达成共识,包括为制定基准做出贡献; (h)视情况与其他欧盟机构、部门、办公室和机关以及相关欧盟专家组和专家网络合作,特别是在产品安全、网络安全、竞争、数字和媒体服务、金融服务、消费者保护、数据和基本权利保护等领域合作; (i)推动与第三国主管机关和国际组织的有效合作; (j)协助国家主管机关和欧盟委员会发展实施本法所需的组织和技术专长,包括协助评估参与实施本法的成员国工作人员的培训需求; (k)协助人工智能办公室支持国家主管机关建立和发展人工智能监管沙盒,促进人工智能监管沙盒之间的合作和信息共享; (l)协助制定指导文件,并提供相关建议; (m)就人工智能相关国际事务向欧盟委员会提供建议; (n)就通用人工智能模型的合格警报向欧盟委员会提供意见; (o)接收成员国就通用人工智能模型合格警报以及人工智能系统(特别是集成通用人工智能模型的系统)监测和执法方面的国家经验和实践提出的意见。 |
Article 67 Advisory forum 1. An advisory forum shall be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under this Regulation. 2. The membership of the advisory forum shall represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings. 3. The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in paragraph 2, from amongst stakeholders with recognised expertise in the field of AI. 4. The term of office of the members of the advisory forum shall be two years, which may be extended by up to no more than four years. 5. The Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) shall be permanent members of the advisory forum. 6. The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its members, in accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two years, renewable once. 7. The advisory forum shall hold meetings at least twice a year. The advisory forum may invite experts and other stakeholders to its meetings. 8. The advisory forum may prepare opinions, recommendations and written contributions at the request of the Board or the Commission. 9. The advisory forum may establish standing or temporary sub-groups as appropriate for the purpose of examining specific questions related to the objectives of this Regulation. 10. The advisory forum shall prepare an annual report on its activities. That report shall be made publicly available. |
第六十七条 咨询论坛 1、应当设立一个咨询论坛,提供技术专长,为人工智能委员会和欧盟委员会提供建议,并协助其履行本法规定的职责。 2、咨询论坛的成员应当从行业界、初创企业、中小企业、民间团体和学术界等利益相关方中均衡选择。咨询论坛的成员应当平衡商业和非商业利益,在商业利益类别中平衡中小企业和其他企业利益。 3、欧盟委员会应当根据本条第2款规定,从在人工智能领域具有公认专业知识的利益相关者中任命咨询论坛成员。 4、咨询论坛成员的任期为两年,最长可延长四年。 5、欧盟基本权利署(The Fundamental Rights Agency)、欧盟网络安全局(ENISA)、欧洲标准化委员会(CEN)、欧洲电工标准化委员会(CENELEC)和欧洲电信标准化协会(ETSI)应当作为咨询论坛的永久成员。 6、咨询论坛应当制定其议事规则,并根据本条第2款规定的标准从其成员中选出两名联席主席。联席主席的任期为两年,可连任一次。 7、咨询论坛每年至少召开两次会议,并可以邀请专家和其他利益相关方参加其会议。 8、咨询论坛可以根据人工智能委员会或欧盟委员会的要求编制意见、建议和书面材料。 9、咨询论坛可以酌情设立常设或临时小组,以审查与本法所规定目标有关的具体问题。 10、咨询论坛应当编写并向社会公众公开一份有关其活动的年度报告。 |
Article 68 Scientific panel of independent experts 1. The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). 2. The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to demonstrate meeting all of the following conditions: (a) having particular expertise and competence and scientific or technical expertise in the field of AI; (b) independence from any provider of AI systems or general-purpose AImodels; (c)an ability to carry out activities diligently, accurately and objectively. The Commission, in consultation with the Board, shall determine the number of experts on the panel in accordance with the required needs and shall ensure fair gender and geographical representation. 3. 
The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks: (a) supporting the implementation and enforcement of this Regulation as regards general-purpose AI models and systems, in particular by: (i) alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in accordance with Article 90; (ii)contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI models and systems, including through benchmarks; (iii) providing advice on the classification of general-purpose AI models with systemic risk; (iv) providing advice on the classification of various general-purpose AI models and systems; (v) contributing to the development of tools and templates; (b) supporting the work of market surveillance authorities, at their request; (c)supporting cross-border market surveillance activities as referred to in Article 74(11), without prejudice to the powers of market surveillance authorities; (d) supporting the AI Office in carrying out its duties in the context of the Union safeguard procedure pursuant to Article 81. 4. The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage and prevent potential conflicts of interest. 5. The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and detailed arrangements for the scientific panel and its members to issue alerts, and to request the assistance of the AI Office for the performance of the tasks of the scientific panel. |
第六十八条 独立专家科学小组 1、欧盟委员会应当通过制定实施细则,就设立独立专家科学小组(简称“科学小组”)作出规定,旨在支持本法的实施。实施细则应当根据本法第九十八条第2款规定经审查程序审查通过。 2、科学小组应当由欧盟委员会根据履行本条第3款项下职责所需的人工智能领域最新科学或技术专长选定的专家组成,该等专家应当能够证明其符合下列全部条件: (a)在人工智能领域拥有特定的专业知识和能力以及科学或技术专长; (b)独立于所有人工智能系统或通用人工智能模型提供者; (c)具备勤勉、准确、客观地开展活动的能力。 欧盟委员会应当与人工智能委员会协商,根据需求确定科学小组的专家人数,并应确保(科学小组)具有公平的性别和地域代表性。 3、科学小组应当向人工智能办公室提供咨询和支持,特别是在以下任务方面: (a)支持本法关于通用人工智能模型和系统相关规定的实施和执行,特别是通过以下方式: (i)根据本法第九十条规定,提醒人工智能办公室通用人工智能模型在欧盟层面可能存在的系统性风险; (ii)支持(包括通过基准)开发通用人工智能模型和系统能力评估工具和方法; (iii)就具有系统性风险的通用人工智能模型的分类提供建议; (iv)就各类通用人工智能模型和系统的分类提供建议; (v)协助开发工具和模板; (b)根据市场监管机构的要求支持其工作; (c)在不影响市场监管机构权力的情况下,支持本法第七十四条第11款项下跨境市场监管活动; (d)在本法第八十一条规定的欧盟保障程序中,支持人工智能办公室履行职责。 4、科学小组中的专家应当公正客观地履行职责,并应当确保对其在履行职责和活动中获得的信息和数据保密。他们在履行本条第3款项下职责时,不得寻求或接受任何人的指示。每位专家都应当起草并向公众公开一份利害关系声明。人工智能办公室应当建立规则和程序,积极管理和预防潜在的利益冲突。 5、本条第1款所述实施细则应当包括科学小组及其成员发出警报和请求人工智能办公室协助其履行科学小组职责相关的条件、程序和详细安排的规定。 |
Article 69 Access to the pool of experts by the Member States 1. Member States may call upon experts of the scientific panel to support their enforcement activities under this Regulation. 2. The Member States may be required to pay fees for the advice and support provided by the experts. The structure and the level of fees as well as the scale and structure of recoverable costs shall be set out in the implementing act referred to in Article 68(1), taking into account the objectives of the adequate implementation of this Regulation, cost-effectiveness and the necessity of ensuring effective access to experts for all Member States. 3. The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure that the combination of support activities carried out by Union AI testing support pursuant to Article 84 and experts pursuant to this Article is efficiently organised and provides the best possible added value. |
第六十九条 成员国获取专家库支持 1、各成员国可以请科学小组的专家支持其根据本法开展的执法活动。 2、成员国可能需要为专家提供的咨询和支持付费。本法第六十八条第1款所述实施细则应当规定费用的结构和标准以及可收回成本的规模和结构,并考虑充分实施本法的目标、成本效益以及确保所有成员国有效获得专家支持的必要性。 3、欧盟委员会应当在成员国需要时为其及时获得专家支持提供便利,并确保根据本法第八十四条由欧盟人工智能测试支持机构开展的支持活动与根据本条由专家开展的支持活动得到有效组织,尽可能提供最大附加值。 |
SECTION 2 National competent authorities |
第二节 国家主管机关 |
Article 70 Designation of national competent authorities and single points of contact 1. Each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of this Regulation. Those national competent authorities shall exercise their powers independently, impartially and without bias so as to safeguard the objectivity of their activities and tasks, and to ensure the application and implementation of this Regulation. The members of those authorities shall refrain from any action incompatible with their duties. Provided that those principles are observed, such activities and tasks may be performed by one or more designated authorities, in accordance with the organisational needs of the Member State. 2. Member States shall communicate to the Commission the identity of the notifying authorities and the market surveillance authorities and the tasks of those authorities, as well as any subsequent changes thereto. Member States shall make publicly available information on how competent authorities and single points of contact can be contacted, through electronic communication means by 2 August 2025. Member States shall designate a market surveillance authority to act as the single point of contact for this Regulation, and shall notify the Commission of the identity of the single point of contact. The Commission shall make a list of the single points of contact publicly available. 3. Member States shall ensure that their national competent authorities are provided with adequate technical, financial and human resources, and with infrastructure to fulfil their tasks effectively under this Regulation. 
In particular, the national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of AI technologies, data and data computing, personal data protection, cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and, if necessary, update competence and resource requirements referred to in this paragraph on an annual basis. 4. National competent authorities shall take appropriate measures to ensure an adequate level of cybersecurity. 5. When performing their tasks, the national competent authorities shall act in accordance with the confidentiality obligations set out in Article 78. 6. By 2 August 2025, and once every two years thereafter, Member States shall report to the Commission on the status of the financial and human resources of the national competent authorities, with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations. 7. The Commission shall facilitate the exchange of experience between national competent authorities. 8. National competent authorities may provide guidance and advice on the implementation of this Regulation, in particular to SMEs including start-ups, taking into account the guidance and advice of the Board and the Commission, as appropriate. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union law, the national competent authorities under that Union law shall be consulted, as appropriate. 9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision. |
第七十条 指定国家主管机关和单一联络站 1、 为本法之目的,每个成员国都应当建立或指定至少一个通报机构和至少一个市场监管机构作为国家主管机关。该等国家主管机关应当独立、公正、无偏见地行使权力,以确保其活动和职责的客观性,并确保本法的适用和实施。该等机关的成员应当避免采取任何与其职责不相容的行动。在遵守上述原则的前提下,该等活动和职责可以根据成员国的组织需求由一个或多个指定机关执行。 2、 成员国应当将通报机构和市场监管机构的身份及其职责以及其后的任何变化告知欧盟委员会。成员国应当在2025年8月2日以前通过电子通信方式公开其主管机关和单一联络站的联系方式信息。成员国应当指定一个市场监管机构作为实施本法的单一联络站,并应当将其单一联络站的身份通知欧盟委员会。欧盟委员会应当公布一份(各成员国的)单一联络站名单。 3、 成员国应当确保其国家主管机关获得足够的技术、财政和人力资源以及基础设施,以有效履行本法规定的职责。尤其应当确保国家主管机关拥有足够数量的长期可用人员,该等人员的能力和专业知识应当包括对人工智能技术、数据和数据计算、个人数据保护、网络安全、基本权利、健康和安全风险的深入理解,以及对现有标准和法律要求的了解。成员国应当每年对本款所述的能力和资源要求进行评估,并在必要时予以更新。 4、 国家主管机关应当采取适当措施,确保达到充分的网络安全水平。 5、 国家主管机关在履行职责时,应当遵守本法第七十八条规定的保密义务。 6、 成员国应当在2025年8月2日以前及此后每两年一次,向欧盟委员会报告其国家主管机关的财政和人力资源状况,并评估其充足性。欧盟委员会应当将该等信息转交人工智能委员会供讨论并提出可能的建议。 7、 欧盟委员会应当推动国家主管机关之间的经验交流。 8、 国家主管机关可以就本法的实施提供指导和建议,特别是向包括初创企业在内的中小企业提供指导和建议,并酌情考虑人工智能委员会和欧盟委员会的指导和建议。国家主管机关计划在其他欧盟法律涵盖的领域就人工智能系统提供指导和建议时,应当酌情咨询该欧盟法项下的国家主管机关。 9、 欧盟各机构、团体、办公室或机关属于本法适用范围时,欧洲数据保护监督员应当作为负责监管的主管机关。 |
CHAPTER VIII EU DATABASE FOR HIGH-RISK AI SYSTEMS |
第八章 欧盟高风险人工智能系统数据库 |
Article 71 EU database for high-risk AI systems listed in Annex III 1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraphs 2 and 3 of this Article concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Articles 49 and 60 and AI systems that are not considered as high-risk pursuant to Article 6(3) and which are registered in accordance with Article 6(4) and Article 49. When setting the functional specifications of such database, the Commission shall consult the relevant experts, and when updating the functional specifications of such database, the Commission shall consult the Board. 2. The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where applicable, by the authorised representative. 3. The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4). 4. With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. 5. The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. 
That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. 6. The Commission shall be the controller of the EU database. It shall make available to providers, prospective providers and deployers adequate technical and administrative support. The EU database shall comply with the applicable accessibility requirements. |
第七十一条 附录三所列高风险人工智能系统的欧盟数据库 1、 欧盟委员会应当与各成员国合作,建立并维护一个欧盟数据库,其中包含本条第2款和第3款所述的下列系统的相关信息:按本法第四十九条和第六十条登记的第六条第2款项下高风险人工智能系统,以及按第六条第3款规定不视为高风险但按第六条第4款和第四十九条规定登记的人工智能系统。欧盟委员会在制定上述数据库的功能规范时,应当征求相关专家的意见;在更新该数据库的功能规范时,应当征询人工智能委员会的意见。 2、 本法附录八第A条和第B条所列数据应当由提供者或授权代表(如适用)输入欧盟数据库。 3、 根据本法第四十九条第3款和第4款规定,本法附录八第C条所列数据应当由身为公权力机关、机构或团体的部署者,或代表上述机关、机构或团体行事的部署者输入欧盟数据库。 4、 除本法第四十九条第4款和第六十条第4款(c)项所述部分外,根据本法第四十九条规定登记到欧盟数据库中的信息应当以便于使用的方式可获取并向公众公开。上述信息应当易于浏览且可机读。根据本法第六十条规定登记的信息只能由市场监管机构和欧盟委员会获取,但潜在提供者或提供者同意也向公众提供上述信息的除外。 5、 欧盟数据库仅在根据本法规定收集和处理信息所必需的范围内包含个人数据。该等信息应当包括负责登记该系统并有合法权限代表提供者或部署者(视情况而定)的自然人的姓名和联系方式。 6、 欧盟委员会是欧盟数据库的控制者,并应当向提供者、潜在提供者和部署者提供足够的技术和管理支持。欧盟数据库应当符合所适用的无障碍访问要求。 |