本期内容为欧盟《人工智能法案》中英对照版,法案英文全文9万余字、中文译文约6万字,囿于公号篇幅限制,分四期发出。本期为第1-33条。
目录 (原文没有目录) |
CHAPTER I GENERAL PROVISIONS |
第一章 一般条款 |
CHAPTER II PROHIBITED AI PRACTICES |
第二章 禁止性人工智能活动 |
CHAPTER III HIGH-RISK AI SYSTEMS SECTION 1 Classification of AI systems as high-risk SECTION 2 Requirements for high-risk AI systems SECTION 3 Obligations of providers and deployers of high-risk AI systems and other parties SECTION 4 Notifying authorities and notified bodies SECTION 5 Standards, conformity assessment, certificates, registration |
第三章 高风险人工智能系统 第一节 高风险人工智能系统的分类 第二节 对高风险人工智能系统的要求 第三节 高风险人工智能系统提供者、部署者和其他主体的义务 第四节 通报机构和评定机构 第五节 标准、符合性评估、证书和登记 |
CHAPTER IV TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS |
第四章 特定人工智能系统的提供者和部署者的透明度义务 |
CHAPTER V GENERAL-PURPOSE AI MODELS SECTION 1 Classification rules SECTION 2 Obligations for providers of general-purpose AI models SECTION 3 Obligations of providers of general-purpose AI models with systemic risk SECTION 4 Codes of practice |
第五章 通用人工智能模型 第一节 分类规则 第二节 通用人工智能模型提供者的义务 第三节 具有系统性风险的通用人工智能模型的提供者的义务 第四节 业务守则 |
CHAPTER VI MEASURES IN SUPPORT OF INNOVATION |
第六章 创新支持措施 |
CHAPTER VII GOVERNANCE SECTION 1 Governance at Union level SECTION 2 National competent authorities |
第七章 治理 第一节 欧盟层面的治理 第二节 国家主管机关 |
CHAPTER VIII EU DATABASE FOR HIGH-RISK AI SYSTEMS |
第八章 欧盟高风险人工智能系统数据库 |
CHAPTER IX POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE SECTION 1 Post-market monitoring SECTION 2 Sharing of information on serious incidents SECTION 3 Enforcement SECTION 4 Remedies |
第九章 上市后监测、信息共享和市场监管 第一节 上市后监测 第二节 重大事件信息共享 第三节 实施 第四节 救济 |
CHAPTER X CODES OF CONDUCT AND GUIDELINES |
第十章 行为守则和指导方针 |
CHAPTER XI DELEGATION OF POWER AND COMMITTEE PROCEDURE |
第十一章 授权和委员会程序 |
CHAPTER XII PENALTIES |
第十二章 罚则 |
CHAPTER XIII FINAL PROVISIONS |
第十三章 最后条款 |
《欧盟人工智能法案》
CHAPTER I GENERAL PROVISIONS |
第一章 一般条款 |
Article 1 Subject matter 1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation. 2. This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union; (b) prohibitions of certain AI practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonised transparency rules for certain AI systems; (e) harmonised rules for the placing on the market of general-purpose AI models; (f) rules on market monitoring, market surveillance, governance and enforcement; (g) measures to support innovation, with a particular focus on SMEs, including start-ups. |
第一条 主题事项 1、本法的目的在于改善内部市场的功能,促进以人为本和值得信赖的人工智能(AI)的应用;同时,确保《欧盟基本权利宪章》所规定的健康、安全和基本权利(包括民主、法治和环境保护)得到高水平保护,使其免受欧盟境内人工智能系统的有害影响,并支持创新。 2、本法规定如下内容: (a)在欧盟将人工智能系统投放到市场、投入使用和使用的统一规则; (b)禁止特定人工智能活动; (c)对高风险人工智能系统的具体要求以及此类系统运营方的义务; (d)特定人工智能系统的统一透明度规则; (e)通用人工智能模型投放到市场的统一规则; (f)市场监测、市场监管、治理和执法规则; (g)创新支持措施,特别是针对中小企业(包括初创企业)的支持措施。 |
Article 2 Scope 1. This Regulation applies to: (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union; (d) importers and distributors of AI systems; (e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; (f) authorised representatives of providers, which are not established in the Union; (g) affected persons that are located in the Union. 2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation. 3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences. This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities. This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities. 4. This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. 5. This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU) 2022/2065. 6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development. 7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to Article 10(5) and Article 59 of this Regulation. 8. 
This Regulation does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion. 9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety. 10. This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity. 11. This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers. 12. This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50. |
第二条 适用范围 1、本法适用范围如下: (a)在欧盟将人工智能系统投放到市场或投入使用,或者将通用人工智能模型投放到市场的提供者,无论此类提供者是在欧盟境内还是在第三国成立或设立经营场所; (b)在欧盟境内成立或有经营场所的人工智能系统的部署者; (c)系统输出被用于欧盟境内的人工智能系统的提供者和部署者,尽管其成立地点或办公场所位于第三国,仍受本法调整; (d)人工智能系统的进口方和分销方; (e)以自己的名义或商标将人工智能系统与其产品一起投放市场或投入使用的产品制造者; (f)不在欧盟成立的提供者的授权代表; (g)在欧盟境内的受影响主体。 2、针对按本法第六条第1款规定被界定为高风险人工智能系统的,与附录一B部分所列欧盟统一立法涵盖的产品有关的人工智能系统,仅适用本法第六条第1款、第一百零二条至第一百零九条和第一百一十二条。仅在本法关于高风险人工智能系统的要求已被纳入上述欧盟统一立法的情况下,本法第五十七条方可适用。 3、本法不适用于欧盟法适用范围以外的领域;且在任何情况下,无论成员国委托哪类主体执行与上述权限有关的任务,本法的适用均不得影响欧盟各成员国在国家安全方面的权限。 本法不适用于仅为军事、国防或国家安全目的而被投放到市场、投入使用,或经修改或未经修改予以使用的人工智能系统,无论从事上述活动的主体为何种类型。 本法不适用于未在欧盟市场投放或投入使用,但其输出被用于欧盟的人工智能系统,只要这些系统输出在欧盟仅被用于军事、国防或国家安全目的,无论从事这些活动的主体是何种类型。 4、如果第三国公权力机关、本条第1款规定范围内的国际组织是在与欧盟、欧盟某个或多个成员国签订的国际合作或执法司法合作协议的框架内使用人工智能系统,且该第三国或国际组织在保护个人基本权利和自由方面已经提供充足的保障措施,其上述人工智能系统使用行为不适用本法规定。 5、本法不得影响欧洲议会和欧盟理事会第2022/2065号《关于数字服务单一市场的条例》第二章中关于中介服务提供者责任规定的适用。 6、本法不适用于仅为科学研究和开发目的而特别开发和投入使用的人工智能系统、人工智能模型,包括其输出。 7、欧盟关于保护个人数据、隐私和通信秘密的法律,适用于按本法规定的权利和义务处理的个人数据。在不违反本法第十条第5款和第五十九条的情况下,本法不得影响欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动的条例》、第2018/1725号《关于在欧盟机构、团体、办事处和机关处理个人数据时保护自然人以及关于此类数据自由流动的条例》,或者第2002/58/EC号《关于电子通信领域的个人数据处理和隐私保护的指令(隐私和电子通信指令)》、第2016/680号《关于在主管当局为预防、调查、发现或提起公诉或采取刑事处罚措施而处理个人数据时保护自然人以及关于此类数据自由流动的指令》的实施。 8、本法不适用于投放到市场或投入使用前的人工智能系统或人工智能模型的任何研究、测试或开发活动。此类活动应当遵守所适用的欧盟法律。但真实环境条件下的测试不属于前述排除范围。 9、本法不影响与消费者保护和产品安全有关的其他欧盟法律的规定。 10、本法规定的义务不适用于在纯粹个人的非专业活动中使用人工智能系统的自然人部署者。 11、本法并不妨碍欧盟或各成员国维持或引入对工人更有利的法律、法规或行政规章,以保护他们在雇主使用人工智能系统时的权利,或者鼓励或允许适用对工人更为有利的集体协议。 12、本法不适用于根据自由和开源许可证发布的人工智能系统,但其作为高风险人工智能系统,或者作为本法第五条或第五十条所规定的人工智能系统被投放到市场或投入使用的除外。 |
Article 3 Definitions For the purposes of this Regulation, the following definitions apply: (1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; (2) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm; (3) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge; (4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; (5) ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; (6) ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country; (7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market; (8) ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor; (9) ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market; (10) ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; (11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose; (12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems; (14) ‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property; (15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use; (16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or 
taking out of service or disabling the use of an AI system made available to deployers; (17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made available on the market; (18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose; (19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; (20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled; (21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection; (22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation; (23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed; (24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing; (25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions; (26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020; (27)‘harmonised standard’ means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012; (28)‘common specification’ means a set of technical specifications as defined in Article 2, point (4) of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation; (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters; (30)‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting; (31) ‘validation data set’ means a separate data set or part of the training data set, either as a fixed or variable split; (32) ‘testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service; (33) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output; (34) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic 
data; (35) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database; (36) ‘biometric verification’ means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data; (37) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725; (38) ‘sensitive operational data’ means operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings; (39) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data; (40) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons; (41) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database; (42) ‘real-time remote biometric identification system’ means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention; (43) ‘post-remote biometric identification system’ means a remote biometric identification system other than a real-time remote biometric identification system; (44) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions; (45) ‘law enforcement authority’ means: (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; (46) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security; (47) ‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, 
provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission; (48) ‘national competent authority’ means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor; (49) ‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person, or serious harm to a person’s health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious harm to property or the environment; (50) ‘personal data’ means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679; (51) ‘non-personal data’ means data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679; (52) ‘profiling’ means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679; (53) ‘real-world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions; (54) ‘sandbox plan’ means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox; (55) ‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision; (56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause; (57) ‘testing in real-world conditions’ means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled; (58) ‘subject’, for the purpose of real-world testing, means a natural person who participates in testing in real-world conditions; (59) ‘informed consent’ means a subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate; (60) ‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles 
existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful; (61) ‘widespread infringement’ means any act or omission contrary to Union law protecting the interest of individuals, which: (a) has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which: (i) the act or omission originated or took place; (ii) the provider concerned, or, where applicable, its authorised representative is located or established; or (iii) the deployer is established, when the infringement is committed by the deployer; (b) has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States; (62) ‘critical infrastructure’ means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557; (63) ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market; (64) ‘high-impact capabilities’ means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models; (65) ‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain; (66) ‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems; (67) ‘floating-point operation’ means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base; (68) ‘downstream provider’ means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations. |
第三条 定义 本法所使用下列概念有如下含义: (1)“人工智能系统”,指一种以不同程度的自主性运行为设计目的、在部署后可能表现出适应性并有明确或隐含的目标,从其收到的输入中推断出如何生成可能影响物理环境或虚拟环境的预测、内容、建议或决策等输出的机器系统; (2)“风险”,指发生损害的概率和损害严重程度的组合; (3)“提供者”,指开发人工智能系统或通用人工智能模型,或者委托他人开发人工智能系统或通用人工智能模型,并以自身名义或商标将其有偿或无偿地投放到市场或将该人工智能系统投入使用的自然人或法人、公权力机关、机构或其他主体; (4)“部署者”,指在其权限范围内使用人工智能系统的自然人或法人、公权力机关、机构或其他主体,但在个人非专业活动过程中使用人工智能系统的除外; (5)“授权代表”,指注册地或经营场所位于欧盟境内、接受人工智能系统或通用人工智能模型提供者的书面授权,分别代表其履行和执行本法规定的义务和程序的自然人或法人; (6)“进口方”,指注册地或经营场所在欧盟境内,将带有在第三国成立的自然人或法人的名称或商标的人工智能系统投放到市场的自然人或法人; (7)“分销方”,指处于供应链中,在欧盟市场中供应人工智能系统的、提供者或进口方以外的自然人或法人; (8)“运营方”,指提供者、产品制造者、部署者、授权代表、进口方或分销方; (9)“投放到市场”,指首次在欧盟市场上供应人工智能系统或通用人工智能模型; (10)“在市场上提供”,指以在欧盟市场上分销或使用为目的,在商业活动中有偿或无偿提供人工智能系统或通用人工智能模型; (11)“投入使用”,指直接向部署者提供人工智能系统供其首次使用,或供其在欧盟境内自用于预期目的; (12)“预期目的”,指提供者在使用说明、宣传或销售材料和声明以及技术文件所提供信息中限定的人工智能系统的预期用途,包括具体的使用场景和条件; (13)“合理可预见的不当使用”,指可能因合理可预见的人类行为或与其他系统(包括其他人工智能系统)的交互导致以不符合预期目的方式使用人工智能系统; (14)“安全组件”,指实现产品或人工智能系统的安全功能,或者其故障或失灵将危及人身健康和安全或财产安全的组件; (15)“使用说明”,指提供者提供的信息,特别是告知部署者人工智能系统的预期目的和正确使用方法; (16)“召回人工智能系统”,指旨在使已提供给部署者的人工智能系统返还提供者、退出服务或停止使用而采取的任何措施; (17)“撤回人工智能系统”,指为防止供应链中的人工智能系统被投放到市场而采取的任何措施; (18)“人工智能系统的性能”,指人工智能系统实现其预期目的的能力; (19)“通报机构”,指负责制定和实施符合性评定机构的评估、指定和通报所需程序并对其进行监督的国家机关; (20)“符合性评定”,指证明是否已满足本法第三章第2节中有关高风险人工智能系统的要求的过程; (21)“符合性评定机构”,指开展测试、认证和检验等第三方符合性评定活动的机构; (22)“评定机构”,指根据本法和其他相关欧盟统一立法规定通报的符合性评定机构; (23)“实质性修改”,指人工智能系统被投放到市场或投入使用后发生的变化,该等变化是系统提供者在初步符合性评定中所无法预见或计划的,因此该系统遵守本法第三章第2节规定的要求受到影响或导致该系统经评定的预期目的发生改变; (24)“CE标识”,指提供者用以表明人工智能系统符合本法第三章第2节要求以及其他规定加贴该标识的适用欧盟统一立法要求的标识; (25)“上市后监测系统”,指人工智能系统提供者为收集和检查其投放到市场或投入使用的人工智能系统的使用体验,以确认其是否需要立即采取任何必要的纠正或预防措施而开展的所有活动; (26)“市场监管机构”,指根据欧洲议会和欧盟理事会第2019/1020号《关于产品市场监管和合规,以及修订第2004/42/EC号指令、第765/2008号和第305/2011号条例的条例》开展活动并采取措施的国家机关; (27)“统一标准”,指欧洲议会和欧盟理事会第1025/2012号《关于欧洲标准化,以及修订第89/686/EEC号、第93/15/EEC号、第94/9/EC号、第94/25/EC号、第95/16/EC号、第97/23/EC号、第98/34/EC号、第2004/22/EC号、第2007/23/EC号、第2009/23/EC号和第2009/105/EC号指令,废除欧洲议会和欧盟理事会与欧洲经济区相关的第87/95/EEC号和第1673/2006/EC号决定的条例》第2条第1款第(c)项所定义的统一标准; (28)“通用规范”,指欧洲议会和欧盟理事会第1025/2012号《关于欧洲标准化,以及修订第89/686/EEC号、第93/15/EEC号、第94/9/EC号、第94/25/EC号、第95/16/EC号、第97/23/EC号、第98/34/EC号、第2004/22/EC号、第2007/23/EC号、第2009/23/EC号和第2009/105/EC号指令,废除欧洲议会和欧盟理事会与欧洲经济区相关的第87/95/EEC号和第1673/2006/EC号决定的条例》第2条第(4)项所定义的一系列技术规范,为符合本法规定的特定要求提供方法; (29)“训练数据”,指通过拟合人工智能系统的可学习参数来训练人工智能系统的数据; (30)“验证数据”,指用于对已训练的人工智能系统进行评估,并调整其不可学习的参数及其学习过程以防止拟合不足或拟合过度的数据; (31)“验证数据集”,指单独数据集或训练数据集的组成部分,可以是固定的或可变的分割; (32)“测试数据”,指用于对人工智能系统进行独立评估,以便在该系统投放到市场或投入使用之前确认其预期性能的数据; (33)“输入数据”,指提供给人工智能系统或由人工智能系统直接获取,并作为系统产生输出的依据的数据; (34)“生物特征数据”,指与自然人的身体、生理或行为特征相关的,经过特定技术处理所产生的个人数据,如面部图像或指纹数据; (35)“生物特征识别”,指通过将自然人的生物特征数据与数据库中存储的人类生物特征数据进行比较,自动识别该自然人的身体、生理、行为或心理特征以确定自然人身份的活动; (36)“生物特征验证”,指通过将自然人的生物特征数据与此前提供的生物特征数据进行比对,对该自然人的身份进行自动一对一验证的活动,包括身份验证; (37)“特殊类别的个人数据”,指欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动,并废除第95/46/EC号指令(一般数据保护条例)的条例》第9条第1款、第2016/680号《关于在主管当局为预防、调查、发现或提起公诉或采取刑事处罚措施而处理个人数据时保护自然人和此类数据自由流动,并废除欧盟理事会第2008/977/JHA号框架决定的指令》第10条,和第2018/1725号《关于在欧盟机构、团体、办事处和机关处理个人数据时保护自然人以及此类数据自由流动,并废除第45/2001号条例和第1247/2002/EC号决定的条例》第10条第1款关于个人数据类别的规定; (38)“敏感业务数据”,指与预防、侦查、调查犯罪或提起公诉等活动有关的、一旦被披露可能会危及刑事诉讼完整性的业务数据; (39)“情绪识别系统”,指一种用于按自然人的生物特征数据识别或推断其情绪或意图的人工智能系统; (40)“生物识别分类系统”,指一种用于按自然人的生物识别数据将其分为特定类别的人工智能系统,但其附属于另一项商业服务且基于客观技术原因确属必要的除外; (41)“远程生物特征识别系统”,指一种在没有自然人积极参与的情况下,通过将个人的生物特征数据与参考数据库中包含的生物特征数据进行比较,通常在一定距离外识别自然人的人工智能系统; (42)“实时远程生物特征识别系统”,指一种生物特征数据的捕获、比较和识别都在没有明显延迟的情况下进行的远程生物特征身份识别系统,不仅包括即时识别,也包括为避免规避而发生的有限短时延迟; (43)“非实时远程生物识别系统”,指除实时远程生物识别系统之外的远程生物特征识别系统; (44)“公共可进入空间”,指不特定自然人可进入的任何公共或私有物理场所,无论是否附带特定进入条件,也无论其是否有潜在的容量限制; (45)“执法机关”是指: (a)负责预防、调查、侦查或提起公诉或采取刑事处罚措施的公权力机关,包括预防和抵御对公共安全的威胁;或 (b)经成员国法律授权行使公权力的任何其他机构或实体,为预防、调查、发现或提起公诉或采取刑事处罚措施,包括预防和抵御对公共安全的威胁;
(46)“执法”,指执法机关或其代表为预防、调查、侦查或提起公诉或采取刑事处罚措施而开展的活动,包括防范和抵御对公共安全的威胁; (47)“人工智能办公室”,指欧盟委员会2024年1月24日决定所规定的该会促进人工智能系统和通用人工智能模型的实施、监测和监督以及人工智能治理的职能。本法所称人工智能办公室应解释为欧盟委员会; (48)“国家主管机关”,指通报机构或市场监管机构。至于欧盟各机构、机关、办公室等主体投入使用或使用的人工智能系统,本法所称国家主管机关或市场监管机构应解释为欧洲数据保护监督员。 (49)“重大事件”,指直接或间接造成以下任一后果的人工智能系统事件或故障: (a)人员死亡或其健康受到严重损害; (b)关键基础设施的管理或运营遭受不可逆转的严重干扰; (c)违反欧盟法律旨在保护基本权利的义务; (d)对财产或环境造成严重损害。 (50)“个人数据”,指欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动的条例》第4条第(1)款定义的个人数据; (51)“非个人数据”,指除欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动的条例》第4条第(1)款定义的个人数据以外的数据; (52)“特征分析”,指欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动的条例》第4条第(4)款定义的特征分析; (53)“真实环境测试计划”,指描述真实环境条件下测试的目标、方法、地理、人口和时间范围、监测、组织和实施的文件; (54)“沙盒计划”,指相关提供者与主管机关之间达成的,描述在沙盒内开展特定活动的目标、条件、时间表、方法和要求的文件; (55)“人工智能监管沙盒”,指由主管机关建立的受控框架,为人工智能系统的提供者或潜在提供者提供按照沙盒计划、在有限时间内、在监管监督下开发、训练、验证和测试创新人工智能系统(适当时可在真实环境条件下进行)的可能性; (56)“人工智能素养”,指使提供者、部署者和受影响者能够在考虑本法规定的各方权利义务的情况下,对人工智能系统进行知情部署,并了解人工智能的机遇和风险及其可能带来的危害的技能、知识和认知; (57)“真实环境条件测试”,指在满足本法第57条或第60条所规定全部条件的前提下,在实验室或其他模拟环境之外的真实环境中,按照人工智能系统的预期目的对其进行临时测试,以收集可靠和稳健的数据,评估和验证人工智能系统是否符合本法规定的要求;该测试不构成本法意义上的将人工智能系统投放到市场或投入使用; (58)“受试者”,就真实环境测试而言,指在真实环境条件下参与测试的自然人; (59)“知情同意”,指受试者在获悉与其参与特定真实环境条件测试决定有关的所有方面情况后,自由、明确、具体且自愿地作出的参与该真实环境条件测试的意思表示; (60)“深度伪造”,指人工智能生成或操纵的、与真实存在的人、物、地点、实体或事件相似,并且可能会被人误认为真实或真切的图像、音频或视频内容; (61)“大规模侵权”,指违反欧盟关于保护个人利益的法律且符合下列条件的行为或不作为: (a)损害或可能损害居住在除如下成员国以外地区的两个或两个以上成员国个体的集体利益: (i)该行为或不作为的起源地或发生地; (ii)相关提供者或其授权代表(如适用)的所在地或设立地;或 (iii)当部署者实施侵权行为时,部署者的设立地; (b)已经造成、正在造成或可能造成个体集体利益的损害,具有共同特征(包括相同的违法行为或被侵害的相同利益),且由同一运营方在至少三个成员国内同时实施; (62)“关键基础设施”,指欧洲议会和欧盟理事会第2022/2557号《关于关键实体弹性,及废除第2008/114/EC号指令的指令》第2条第(4)款定义的关键基础设施; (63)“通用人工智能模型”,指一种利用大量数据、运用大规模自我监督训练方式训练,呈现显著的通用性、能够胜任各种不同的任务,并且无论以何种方式投放到市场都可以集成到各种下游系统或应用程序中的人工智能模型。但投放到市场前用于研究、开发或原型制作活动的人工智能模型除外; (64)“高影响能力”,指与最先进的通用人工智能模型所记录的能力相当或超过该等能力的能力; (65)“系统性风险”,指通用人工智能模型的高影响能力所特有的风险,因其影响范围,或因其对公共卫生、安全、公共安全、基本权利或整个社会的实际或合理可预见的负面影响而对欧盟市场产生重大影响,且可在整个价值链中大规模传播; (66)“通用人工智能系统”,指基于通用人工智能模型的人工智能系统,能够服务于各类目的,既可以直接使用,也可以集成到其他人工智能系统中; (67)“浮点运算”,指任何涉及浮点数的数学运算或赋值运算,浮点数是实数的子集,通常在计算机上由以固定基数的整数指数缩放的固定精度整数表示; (68)“下游提供者”,指集成人工智能模型的人工智能系统(包括通用人工智能系统)的提供者,无论该人工智能模型是由其自身提供并垂直集成,还是由其他实体基于合同关系提供。 |
Article 4 AI literacy Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. |
第四条 人工智能素养 人工智能系统的提供者和部署者应当采取措施,最大程度确保其员工和其他代表其操作和使用人工智能系统的人员具有足够的人工智能素养,考虑到他们的技术知识、经验、教育背景和培训情况以及人工智能系统的使用环境,兼顾人工智能系统所面向的个人或群体。 |
CHAPTER II PROHIBITED AI PRACTICES |
第二章 禁止性人工智能活动 |
Article 5 Prohibited AI practices 1. The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm; (b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm; (c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity; (d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity; (e) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; (f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons; (g) the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement; (h) the use of ‘real-time’ remote biometric 
identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of biometric data for purposes other than law enforcement. 2、The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. 3. For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. 
However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. 4. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. 5. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, first subparagraph, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems. 6. National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. 
For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result. 7. The Commission shall publish annual reports on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related law enforcement activities. 8. This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law. |
第五条 禁止性人工智能活动 1、禁止利用人工智能实施下列活动: (a)将部署了超出个人意识的潜意识技术或故意操纵或欺骗技术、目的或效果在于通过明显损害个人或群体做出明智决定的能力严重扭曲个人或群体行为,使其做出他们不会以其他方式做出的决定,从而造成或合理地可能造成本人、他人或其他群体严重伤害的人工智能系统投放到市场、投入使用或使用; (b)将利用自然人或特定群体因年龄、残障、特定社会或经济状况而面临的弱势地位,目的或效果在于以造成或合理地可能造成该主体或他人严重损害的方式实质性地扭曲该自然人或该群体成员行为的人工智能系统投放到市场、投入使用或使用; (c)将用于在一段时间内根据自然人或群体的社会行为或者已知、推断或预测的个人特征或性格特征对其进行评估或分类的人工智能系统投放到市场、投入使用或使用,且该社会评分导致以下任一或全部后果的: (i)在与数据最初生成或收集的环境无关的社会环境中,以有损或不利的方式对待特定自然人或群体; (ii)使特定自然人或群体遭受不合理的,或者与其社会行为或其严重程度不成比例的损害或不利对待; (d)将仅根据对自然人的特征分析或对其人格特质和特征的评估,对自然人进行风险评估以评估或预测其实施犯罪行为风险的人工智能系统投放到市场、为该特定目的投入使用或使用;但该项禁止性规定不适用于用于辅助人类基于与犯罪活动直接相关的客观且可核查的事实,对特定自然人参与犯罪活动的情况进行评估的人工智能系统; (e)将从互联网或闭路电视录像中无针对性地抓取面部图像创建或扩展面部识别数据库的人工智能系统投放到市场、为该特定目的投入使用或使用; (f)将在工作场所和教育机构中推断自然人的情绪的人工智能系统投放到市场、为该特定目的投入使用或使用,但该人工智能系统的使用是出于医疗或安全目的而投放到市场的除外; (g)将根据生物识别数据对自然人进行分类,以推断或推测其种族、政治立场、工会会员资格、宗教或哲学信仰、性生活或性取向的生物识别分类系统投放到市场、为该特定目的投入使用或使用;但该项禁止性规定不包括根据生物特征数据对合法获取的生物特征数据集(如图像)贴标签或过滤,也不包括在执法领域对生物特征数据进行分类; (h)除非为如下目的之一且绝对必要外,在公共场所为执法目的使用“实时”远程生物识别系统: (i)有针对性地搜寻绑架、人口贩卖或性剥削犯罪的具体受害者,以及搜寻失踪人员; (ii)预防对自然人的生命健康或人身安全构成具体、重大且迫在眉睫的威胁,或预防真实且迫在眉睫或真实且可预见的恐怖袭击威胁; (iii)定位或识别犯罪嫌疑人,以便对本法附录二中所述、在有关成员国可判处最高刑期至少为四年的监禁或拘留令的犯罪行为进行刑事调查或提起公诉,或执行刑事处罚措施。 上述第1款第(h)项规定不影响欧洲议会和欧盟理事会第2016/679号《关于在处理个人数据方面保护自然人和此类数据自由流动,并废除第95/46/EC号指令(一般数据保护条例)的条例》第9条关于为执法以外的目的处理生物特征数据的规定。 2、为实现本条第1款第(h)项规定的目标,为执法目的在公众可进出场所使用“实时”远程生物识别系统,应当仅为该项所规定的目的部署,且仅用于确认特定目标人员的身份,并应考虑以下因素: (a)引发该系统可能使用的情况的性质,尤其是如果不使用该系统将会造成的危害的严重程度、可能性和大小; (b)使用该系统对所有相关人员的权利和自由的影响,特别是这些影响的严重程度、可能性和大小。 此外,为实现本条第1款第(h)项所规定的目的,为执法目的在公众可进出场所使用“实时”远程生物识别系统,应当遵守授权使用该系统的成员国法律所规定的必要且适当的使用保障和条件,尤其是在时间、地点和个体限制方面。只有在执法机构完成本法第二十七条规定的基本权利影响评估,并根据本法第四十九条规定在欧盟数据库中完成该系统登记的情况下,才能授权其在公共场所使用该“实时”远程生物识别系统。但是,在有正当理由的紧急情况下,也可以在未完成欧盟数据库登记的情况下开始使用此类系统,但应毫不迟延地完成该项登记。 3、根据本条第1款第(h)项和第2款规定在公共场所为执法目的使用“实时”远程生物识别系统,应当事先获得使用地成员国司法机关或其决定具有约束力的独立行政机关的批准;该项批准应当根据本条第5款所述成员国法律的具体规定,基于说明理由的请求签发。但是,在正当的紧急情况下,可以在未经批准的情况下开始使用此类系统,但应毫不迟延地提出批准申请,最迟不得超过24小时。如果此类申请被拒绝,则应立即停止使用,所有数据和使用结果及输出应当被立即丢弃和删除。 主管司法机关或其决定具有约束力的独立行政机关只有在根据其获取的客观证据或明确迹象足以认定,使用有关的“实时”远程生物识别系统对于实现申请中所指明的、本条第1款第(h)项所规定目标之一确有必要且适当,尤其是仍然仅限于在绝对必要的时间、地点和个体范围内使用的情况下,才应当作出批准决定。在作出批准决定时,批准机关应当考虑本条第2款所规定的因素。不得仅根据“实时”远程生物识别系统的输出做出对个人产生不利法律影响的决定。 4、在不违反本条第3款的情况下,应当按照本条第5款所规定的国家规范,将每一次在公共场所为执法目的使用“实时”远程生物识别系统的情况通知相关市场监管机构和国家数据保护机构。通知应至少包含本条第6款规定的信息,且不应包含敏感业务数据。 5、成员国可以制定规则,规定全部或部分批准相关机构在本条第1款第(h)项、第2款和第3款所规定的限制和条件下,为执法目的在公共场所使用“实时”远程生物识别系统的可能性。有关成员国应当在其本国法项下制定必要的具体规则,明确本条第3款所规定批准的申请、签发和实施以及相关监督和报告事宜。该等规则还应明确基于本条第1款第(h)项所规定的哪些目标,包括第(h)项第(iii)点所述的哪些犯罪行为,主管机关才有权为执法目的使用这些系统。各成员国最晚应当在通过具体规则后30天内通报欧盟委员会。各成员国可根据欧盟法,针对远程生物识别系统的使用制定更严格的法律。 6、根据本条第4款,各成员国的国家市场监管机构和国家数据保护机构在收到关于在公共场所为执法目的使用“实时”远程生物识别系统的通知后,应当向欧盟委员会提交关于此类系统使用情况的年度报告。为此,欧盟委员会应向各成员国及其国家市场监管机构和数据保护机构提供一个模板,其中包括主管司法机关或其决定具有约束力的独立行政机关根据本条第3款就批准请求作出决定的数量及其结果的信息。 7、欧盟委员会应当在各成员国根据本条第6款规定提交的年度报告基础上,根据成员国汇总的数据,发布关于在公共场所为执法目的使用实时远程生物识别系统的年度报告。该等年度报告不得包含相关执法活动的敏感业务数据。 8、本条不影响人工智能活动违反其他欧盟法时可适用的禁止性规定。 |
CHAPTER III HIGH-RISK AI SYSTEMS |
第三章 高风险人工智能系统 |
SECTION 1 Classification of AI systems as high-risk |
第一节 高风险人工智能系统的分类 |
Article 6 Classification rules for high-risk AI systems 1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I. 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk. 3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. The first subparagraph shall apply where any of the following conditions is fulfilled: (a) the AI system is intended to perform a narrow procedural task; (b) the AI system is intended to improve the result of a previously completed human activity; (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III. Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. 4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. 5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk. 6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. 7. 
The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by this Regulation. 8. Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and take account of market and technological developments. |
第六条 高风险人工智能系统的分类规则 1、无论人工智能系统是否独立于下列第(a)项和第(b)项所述产品被投放到市场或投入使用,只要满足下列两个条件,该人工智能系统都应被视为高风险系统: (a)人工智能系统拟被用作产品的安全组件,或者人工智能系统本身即为产品,且该产品为本法附录一所列欧盟统一立法所涵盖; (b)根据第(a)点规定安全组件为人工智能系统的产品,或人工智能系统本身即为产品的,需要进行第三方符合性评定,以便根据本法附录一所载的欧盟统一立法将该产品投放到市场或投入使用。 2、除本条第1款所述高风险人工智能系统外,本法附录三所载人工智能系统也应被视为高风险系统。 3、作为本条第2款的例外,如果本法附录三所述人工智能系统不会对自然人的健康、安全或基本权利构成重大损害风险,包括不会对决策结果产生重大影响,则不应被视为高风险系统。 满足以下任一条件时,适用前款规定: (a)人工智能系统旨在执行狭义的程序性任务; (b)人工智能系统旨在改善此前已经完成的人类活动的结果; (c)人工智能系统旨在检测决策模式或其与先前决策模式的偏差,且并非旨在在未经适当人工审查的情况下取代或影响先前已完成的人工评估;或 (d)人工智能系统旨在执行与本法附录三所列用例有关的评估准备任务。 尽管有本款第一项的规定,本法附录三所述人工智能系统只要对自然人进行特征分析,即应一律被视为高风险。 4、认为本法附录三所述人工智能系统不具有高风险的提供者,应当在将该系统投放到市场或投入使用之前记录其评估。该提供者应当遵守本法第四十九条第2款规定的登记义务,并应根据国家主管机关的要求提供该项评估的文件记录。 5、欧盟委员会应当在征询欧洲人工智能委员会(“人工智能委员会”)的意见后,最晚于2026年2月2日根据本法第九十六条规定制定指导方针,详细说明本条的具体实施方式,并附一份高风险和非高风险人工智能系统用例的综合清单。 6、在有具体可靠的证据表明确实存在属于本法附录三所列范围但不会对自然人的健康、安全或基本权利造成重大损害风险的人工智能系统的情况下,欧盟委员会有权根据本法第九十七条制定规章条例,通过增加新的条件或修改现有条件来修订本条第3款第二项。 7、当有具体可靠的证据表明,为维持本法所规定的健康、安全和基本权利的保护水平,确有必要修改本条第3款第二项时,欧盟委员会应当根据本法第九十七条规定制定规章条例,通过删除其中任何现有条件修订本条第3款第二项。 8、根据本条第6款和第7款规定对本条第3款第二项规定的条件所作的任何修订,均不得降低本法规定的健康、安全和基本权利的整体保护水平,并应确保其内容与根据本法第七条第1款规定制定的规章条例保持一致,且应考虑到市场和技术的发展。 |
Article 7Amendments to Annex III 1. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled: (a) the AI systems are intended to be used in any of the areas listed in Annex III; (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. 2. When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following criteria: (a) the intended purpose of the AI system; (b) the extent to which an AI system has been used or is likely to be used; (c) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed; (d) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm; (e) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate; (f) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect multiple persons or to disproportionately affect a particular group of persons; (g) the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome; (h) the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age; (i) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights, shall not be considered to be easily corrigible or reversible; (j) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety; (k) the extent to which existing Union law provides for: (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages; (ii) effective measures to prevent or substantially minimise those risks. 3. 
The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled: (a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2; (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law. |
第七条 对本法附录三的修订 1、欧盟委员会有权根据本法第九十七条规定制定规章条例,通过增加或修改符合下列两个条件的高风险人工智能系统的用例来修改本法附录三: (a)人工智能系统拟被用于本法附录三所列的任何领域; (b)人工智能系统构成危害健康和安全或对基本权利造成不利影响的风险,且该风险等于或大于本法附录三所列高风险人工智能系统所构成的危害或不利影响风险。 2、在评估本条第1款(b)项所述情形时,欧盟委员会应考虑以下标准: (a)人工智能系统的预期目的; (b)人工智能系统已经被使用或可能被使用的程度; (c)人工智能系统处理和使用的数据的性质和数据量,特别是是否处理特殊类型的个人数据; (d)人工智能系统自主运行的程度,以及人类推翻可能造成潜在危害的决定或建议的可能性; (e)人工智能系统的使用在多大程度上已经造成健康和安全损害,对基本权利产生了不利影响,或者引起了人们对这种损害或不利影响发生可能性的重大关切;例如,有向国家主管机关提交的报告或记录在案的指控,或者视情况有其他报告予以证实; (f)此类危害或不利影响的潜在程度,特别是其强度、影响多人或不成比例地影响特定群体的能力; (g)可能遭受危害或不利影响的人在多大程度上依赖于人工智能系统产生的结果,尤其是因实际或法律原因而无法合理地选择退出该结果; (h)权力失衡的程度,或者相对于人工智能系统的部署者,可能遭受损害或不利影响的人处于弱势地位,特别是因身份、权力、知识、经济或社会条件、年龄因素; (i)考虑到可用于纠正或逆转人工智能系统输出结果的技术解决方案,涉及人工智能系统的输出结果在多大程度上容易被纠正或逆转,基于此,对健康、安全或基本权利造成不利影响的这类结果不应被视为易于纠正或可逆; (j)部署人工智能系统给个人、群体或整个社会带来的益处的大小和可能性,包括可能对产品安全的改进; (k)现行欧盟法就下列事项的现有规定: (i)针对人工智能系统带来的风险采取有效的补救措施,不包括损害赔偿请求权; (ii)防止或大幅降低这些风险的有效措施。 3、欧盟委员会有权根据本法第九十七条制定规章条例,在满足下列两个条件的情况下,通过删除高风险人工智能系统来修改本法附录三中的清单: (a)考虑到本条第2款所规定标准,相关高风险人工智能系统不再对基本权利、健康或安全产生任何重大风险; (b)删除高风险人工智能系统不会降低欧盟法对健康、安全和基本权利的整体保护水平。 |
SECTION 2 Requirements for high-risk AI systems |
第二节 对高风险人工智能系统的要求 |
Article 8 Compliance with the requirements 1. High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. 2. Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I. |
第八条 符合要求(一般规定) 1、按系统的预期目的以及公认的人工智能和人工智能相关技术的最新发展,高风险人工智能系统应当符合本节规定的要求。在确保符合上述要求的同时,还应考虑本法第九条所述的风险管理系统。 2、如果产品中包含可适用本法和本法附录一第A条所列欧盟统一立法所规定要求的人工智能系统,该系统的提供者负责确保其产品完全符合所适用的欧盟统一立法规定的所有可适用要求。为确保本条第1款所述高风险人工智能系统符合本节规定的要求,并确保一致性、避免重复和尽量减少额外负担,系统提供者应当视情况选择将其提供的、与产品有关的必要测试和报告流程、信息和文档整合到按本法附录一第A条项下欧盟统一立法规定现存和要求的文档和程序中。 |
Article 9 Risk management system 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. 2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: (a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; (b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse; (c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72; (d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a). 3.The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information. 4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. 5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. In identifying the most appropriate risk management measures, the following shall be ensured: (a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system; (b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; (c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers. With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. 6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. 7. Testing procedures may include testing in real-world conditions in accordance with Article 60. 8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. 9. 
When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups. 10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law. |
第九条 风险管理体系 1、应当建立、实施、记录和维护与高风险人工智能系统相关的风险管理体系。 2、风险管理体系应当被理解为在高风险人工智能系统的整个生命周期中计划和运行的连续迭代过程,需要定期进行系统性审查和更新。它应当包括以下步骤: (a)识别和分析按照预期目的使用高风险人工智能系统时,其可能对健康、安全或基本权利带来的已知和合理可预见的风险; (b)预计和评估按照预期目的使用高风险人工智能系统时,以及在合理可预见的滥用情形下使用时,可能出现的风险; (c)根据对本法第七十二条规定的上市后监测系统所收集数据的分析,评估可能出现的其他风险; (d)采取适当和有针对性的风险管理措施,应对根据本款第(a)项确定的风险。 3、本条所述风险仅指通过开发或设计高风险人工智能系统或提供足够的技术信息可以合理减轻或消除的风险。 4、本条第2款(d)项所规定的风险管理措施应适当考虑本节所规定要求的综合应用所产生的影响和可能的相互作用,以便在按这些要求采取措施并达到适当平衡的同时,更有效地降低风险。 5、本条第2款(d)项所述风险管理措施应确保与每种危害相关的剩余风险以及高风险人工智能系统的整体剩余风险处于可接受范围。 在确定最适当的风险管理措施时,应确保: (a)通过充分设计和开发高风险人工智能系统,在技术可行的范围内消除或降低根据本条第2款规定识别和评估的风险; (b)在适当的情况下,采取适当的缓解和控制措施,以应对无法消除的风险; (c)根据本法第十三条规定提供所需信息,并在适当情况下培训系统部署者。 为消除或降低与使用高风险人工智能系统有关的风险,应当适当考虑部署者的技术知识、经验、教育背景、预期将接受的培训以及该系统的可能使用场景。 6、应当对高风险人工智能系统进行测试,以确定最适当且最具针对性的风险管理措施。测试应确保高风险人工智能系统能够始终如一地实现其预期目的,并符合本节规定的要求。 7、测试过程可能包括按本法第六十条规定在真实环境中进行测试。 8、高风险人工智能系统的测试应当视具体情况在整个开发过程中的任何时间进行,且应当在系统被投放到市场或投入使用之前完成。测试应当按预先确定的、与高风险人工智能系统的预期目的相适应的指标和概率阈值进行。 9、在落实本条第1款至第7款规定的风险管理体系时,系统提供者应当考虑高风险人工智能系统的预期目的是否可能对18周岁以下人群和特定情形下的其他弱势群体产生不利影响。 10、对于按欧盟法律的其他相关规定须遵守内部风险管理流程要求的高风险人工智能系统提供者,本条第1款至第9款规定的措施可能是其根据该等欧盟法律规定建立的风险管理程序的一部分,也可能与之相结合。 |
Article 10 Data and data governance 1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used. 2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: (a) the relevant design choices; (b) data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; (c) relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; (d) the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent; (e) an assessment of the availability, quantity and suitability of the data sets that are needed; (f) examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; (g) appropriate measures to detect, prevent and mitigate possible biases identified according to point (f); (h) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. 3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. 4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used. 5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. 
In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: (a) the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data; (b) the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; (c) the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; (d) the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties; (e) the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first; (f) the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. 6. For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets. |
第十条 数据和数据治理 1、如果高风险人工智能系统利用通过数据训练人工智能模型的技术,该系统的开发应当以符合本条第2款至第5款规定的质量标准的训练、验证和测试数据集为基础。 2、训练、验证和测试数据集应当遵循与高风险人工智能系统预期目的相匹配的数据治理和管理实践。这些实践应当特别关注: (a)相关的设计选择; (b)数据收集过程和数据来源,当数据为个人数据时,数据收集的原始目的; (c)相关的数据准备处理操作,如注释、标记、清理、更新、丰富和聚合; (d)假设的制定,尤其是关于数据应当衡量和表达的信息; (e)对所需数据集的可用性、数量和是否合适进行评估; (f)审查可能影响人们健康和安全、对基本权利产生负面影响或导致欧盟法所禁止的歧视的可能偏见,尤其是在数据输出影响未来运行的输入的情况下; (g)采取适当措施检测、预防和减少根据本款第(f)项识别出的可能偏见; (h)查明阻碍遵守本法规定的相关数据缺口或不足,以及如何解决这些缺口和不足。 3、训练、验证和测试数据集应当具有相关性、足够有代表性,并尽可能无错误,且从预期目的角度应当完整。数据集应当具有适当的统计特性,包括与高风险人工智能系统拟用于的个体或群体有关的统计特性(如适用)。数据集的上述特征可以体现在单个数据集中,也可以体现在数据集组合中。 4、在(系统的)预期目的所要求的范围内,数据集应当考虑高风险人工智能系统拟用于的特定地理、语境、行为或功能设置所独有的特征或要素。 5、在为确保按照本条第2款第(f)项和第(g)项规定对高风险人工智能系统进行偏见检测和校正所绝对必要的范围内,此类系统的提供者可以在例外情况下处理特殊类别的个人数据,但应当对自然人的基本权利和自由采取适当的保障措施。除遵守欧洲议会和欧盟理事会第2016/679号和第2018/1725号条例以及第2016/680号指令的规定外,还须满足下列全部条件方可实施上述数据处理活动: (a)通过处理其他数据(包括合成或匿名数据)无法有效完成偏见检测和校正; (b)该等特殊类别的个人数据受个人数据重复使用技术限制、最先进的安全和隐私保护措施的约束,包括假名化要求; (c)对该等特殊类别的个人数据采取措施,确保所处理的个人数据得到保护;采取包括严格控制和记录数据访问等适当的保护措施,以避免(数据)被滥用并确保只有经授权且负有适当保密义务的人员才能访问这些个人数据; (d)其他主体不得传输、转移或以其他方式访问该等特殊类别的个人数据; (e)一旦偏见得到纠正或个人数据保存周期届满(以先到者为准),立即删除该等特殊类别的个人数据; (f)根据欧洲议会和欧盟理事会第2016/679号和第2018/1725号条例以及第2016/680号指令规定,数据处理活动的记录包括处理该等特殊类别的个人数据对于检测和纠正偏见绝对必要的原因,以及处理其他数据无法实现该目标的原因。 6、对于未使用人工智能模型训练技术的高风险人工智能系统的开发,本条第2款至第5款仅适用于其测试数据集。 |
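为帮助理解第十条第2款第(f)项和第(g)项所述“检测并缓解可能的偏见”在工程实践中的一种常见做法,下面给出一个示意性的Python草图(并非法案规定的方法,其中的数据、分组与阈值均为假设):按敏感属性分组比较模型输出的正例率,组间差异超过预设阈值时提示需要采取缓解措施。

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """计算各分组的正例预测率,并返回最大组间差异(示意性偏见度量,并非法案规定的指标)。"""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# 假设的验证集输出与分组标签
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
if gap > 0.2:   # 阈值为假设,应结合系统预期目的与风险管理体系另行确定
    print("检测到可能的偏见,需按第十条第2款(g)项采取缓解措施:", rates)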
Article11Technical documentation 1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. 2. Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex IV, where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in this Section. |
第十一条 技术文档 1、高风险人工智能系统的技术文档应当在该系统被投放到市场或投入使用之前编制,并应当保持更新。 技术文档的编制方式应当能够证明该高风险人工智能系统符合本节规定的要求,并以清晰全面的形式向国家主管机关和评定机构提供必要的信息,以评定该人工智能系统是否符合这些要求。技术文档至少应当包含本法附录四所列的要素。中小企业(包括初创企业)可以以简化的方式提供附录四中规定的技术文档要素。为此,欧盟委员会应当针对小微企业的需求制定一份简化版技术文档表格。如果中小企业(包括初创企业)选择以简化方式提供附录四要求的信息,则应当使用本款所规定的表格。为实现符合性评定目的,评定机构应当接受该表格。 2、如果与附录一第A条所列欧盟统一立法所涵盖的产品相关的高风险人工智能系统被投放到市场或投入使用,则应当编制单一的一套技术文档,其中包含本条第1款规定的所有信息以及上述法案要求的信息。 随着技术进步,为确保技术文档提供评定系统是否符合本节规定要求所需的所有信息,欧盟委员会有权在必要时根据本法第九十七条规定制定规章条例修订本法附录四。 |
Article 12 Record-keeping 1. High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. 2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: (a)identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification; (b)facilitating the post-market monitoring referred to in Article 72; and (c)monitoring the operation of high-risk AI systems referred to in Article 26(5). 3. For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c)the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5). |
第十二条 记录保存 1、高风险人工智能系统应当在技术上支持在系统生命周期内自动记录事件(日志)。 2、为确保高风险人工智能系统功能的可追溯性水平与系统的预期目的相匹配,其日志记录功能应当能够记录下列相关事件: (a)识别可能导致高风险人工智能系统出现本法第七十九条第1款所规定风险或实质性修改的情况; (b)促进本法第七十二条所规定的上市后监测;以及 (c)监控本法第二十六条第5款所规定的高风险人工智能系统的运行情况。 3、对于本法附录三第1条(a)款所载高风险人工智能系统,日志记录功能应当至少确保: (a)记录系统的每次使用期间(每次使用的开始日期和时间以及结束日期和时间); (b)系统核验输入数据所依据的参考数据库; (c)搜索后匹配的输入数据; (d)本法第十四条第5款所规定的参与结果验证的自然人的身份。 |
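第十二条第3款列明了附录三第1条(a)款所述系统日志至少应记录的四类信息。下面给出一个示意性的Python草图(并非法案规定的格式,字段名与函数均为假设),展示一条同时包含使用起止时间、所核验的参考数据库、产生匹配的输入数据以及参与结果验证的自然人身份的日志记录可以如何组织。

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class UsageLogRecord:
    """示意:附录三第1条(a)款所述系统单次使用的日志记录(字段名均为假设)。"""
    start_time: datetime                                     # 每次使用的开始日期和时间
    end_time: datetime                                       # 每次使用的结束日期和时间
    reference_database: str                                  # 系统核验输入数据所依据的参考数据库
    matched_inputs: List[str] = field(default_factory=list)  # 搜索后产生匹配的输入数据标识
    verifiers: List[str] = field(default_factory=list)       # 参与结果验证的自然人(第十四条第5款)

def record_usage(log_store: list, record: UsageLogRecord) -> None:
    """将日志自动追加到存储中;按第十九条,留存期应与预期目的相适应且不少于六个月。"""
    log_store.append(record)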
Article 13 Transparency and provision of information to deployers 1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. 2. High-risk AI systems shall be accompanied by instructionsfor use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers. 3. The instructions for use shall contain at least the following information: (a) the identity and the contact details of the provider and, where applicable, of its authorised representative; (b) the characteristics, capabilities and limitations of performance of the high-risk AI system,including: (i) its intended purpose; (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2); (iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output; (v) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used; (vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system; (vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately; (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; (e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; (f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12. |
第十三条 透明度和向部署者提供信息 1、高风险人工智能系统的设计和开发应当确保其运行足够透明,以便部署者能够解释系统的输出并恰当地使用。应当确保透明度的类型和程度适当,以便履行本章第三节规定的提供者和部署者的相关义务。 2、高风险人工智能系统应当以适当数字化形式或其他方式附上使用说明书,包括与部署者有关且其可获取、可理解的简洁、完整、准确和清晰的信息。 3、使用说明书应至少包含以下信息: (a)提供者及其授权代表(如适用)的身份信息和联系方式; (b)高风险人工智能系统的特征、能力和性能限制,包括: (i) 其预期目的; (ii)准确度,包括本法第十五条所规定的高风险人工智能系统经过测试和验证的指标、稳健性和网络安全,以及可能对预期的准确性、稳健性和网络安全产生影响的任何已知和可预见的情况; (iii)涉及按预期目的或在合理可预见的被滥用情形下使用高风险人工智能系统、可能导致本法第九条第2款所规定的健康和安全或基本权利面临风险的任何已知或可预见的情况; (iv)高风险人工智能系统提供与解释其输出相关的信息的技术能力和特征(如适用); (v) 适当时,该系统对其拟用于的特定人员或群体的表现; (vi)适当时,考虑到高风险人工智能系统的预期目的,其输入数据的规格或所使用的训练、验证和测试数据集的任何其他相关信息; (vii)使部署者能够解释并适当地使用高风险人工智能系统的输出的信息(如适用); (c)提供者在初始符合性评定时预先认定的高风险人工智能系统及其性能的变化(如有); (d)本法第十四条所规定的人工监督措施,包括为便于部署者解释高风险人工智能系统的输出而采取的技术措施; (e)确保该人工智能系统正常运行(包括软件的更新)所需的计算和硬件资源、高风险人工智能系统的预期寿命以及任何必要的维护和保养措施(包括维修保养频次); (f)描述高风险人工智能系统中包含的、允许部署者根据本法第十二条规定正确收集、存储和解释日志的机制(如涉及)。 |
Article 14 Human oversight 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. 2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section. 3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer. 4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, asappropriate and proportionate: (a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; (b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; (c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; (d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system; (e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state. 5. For high-risk AI systems referred to in point 1(a) of Annex III,the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate. |
第十四条 人工监督 1、高风险人工智能系统的设计和开发应当确保自然人在系统使用期间能够对其进行有效监督,包括配备适当的人机接口工具。 2、人工监督的目的在于防止或尽量减少在按照预期目的或在合理可预见的滥用条件下使用高风险人工智能系统时可能出现的健康、安全或基本权利风险,尤其是在即使遵守本节规定的其他要求但此类风险仍然存在的情况下。 3、监督措施应当与高风险人工智能系统的风险、自主程度和使用场景相适应,并应通过以下一种或两种措施实现: (a)在技术可行的情况下,由提供者在高风险人工智能系统被投放到市场或投入使用之前确定并内置于该系统中的措施; (b)由提供者在高风险人工智能系统被投放到市场或投入使用之前确定、且适合由部署者实施的措施。 4、为执行本条第1款、第2款和第3款规定,应当以下述方式向部署者提供高风险人工智能系统,使得负责人工监督的自然人能够在适当且相称的范围内做到: (a)正确了解高风险人工智能系统的相关能力和局限性,并能够适当监控其运行情况,包括检测和解决异常、功能障碍和意外性能; (b)时刻警惕自动依赖或过度依赖高风险人工智能系统(尤其是用于为自然人决策提供信息或建议的高风险人工智能系统)所产生输出的可能倾向(自动化偏见); (c)正确解释高风险人工智能系统的输出,例如考虑到可用的解释工具和方法; (d)在任何特定情况下,决定不使用高风险人工智能系统或以其他方式忽略、覆盖或逆转高风险人工智能系统的输出; (e)干预高风险人工智能系统的运行,或通过“停止”按钮或使系统在安全状态下停止的类似程序中断系统运行。 5、对于本法附录三第1条(a)款所述的高风险人工智能系统,(采取)本条第3款所规定的措施应当确保部署者不会根据系统产生的识别结果采取任何行动或作出任何决定,但该识别结果已由至少两名具备必要能力、经过必要培训且具有必要权限的自然人分别核实和确认的除外。 如果欧盟或成员国法律认为适用前款所述至少由两名自然人分别验证的要求不符合比例原则,则该要求不适用于用作执法、移民、边境管制或政治庇护目的的高风险人工智能系统。 |
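第十四条第5款要求,除法定例外情形外,部署者须在至少两名具备必要能力、培训和权限的自然人分别核实并确认识别结果后,方可据以采取行动或作出决定。下面给出一个示意性的Python草图(并非法案规定的实现方式,字段名与数据均为假设),展示这种“双人分别核验”门控逻辑的一种最简思路。

def may_act_on_identification(verifications: list) -> bool:
    """示意:仅当至少两名不同的、具备必要能力与权限的自然人分别确认识别结果时,才允许据以行动。"""
    confirmed = {
        v["verifier_id"]
        for v in verifications
        if v.get("confirmed") and v.get("competent") and v.get("authorised")
    }
    return len(confirmed) >= 2

# 假设的两条独立核验记录
checks = [
    {"verifier_id": "officer_01", "confirmed": True, "competent": True, "authorised": True},
    {"verifier_id": "officer_02", "confirmed": True, "competent": True, "authorised": True},
]
assert may_act_on_identification(checks)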
Article 15 Accuracy, robustness and cybersecurity 1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. 2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies. 3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. 4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. 5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. |
第十五条 准确性、稳健性和网络安全 1、设计和开发的高风险人工智能系统应当达到并在其整个生命周期内均保持适当的准确性、稳健性和网络安全水平。 2、为解决衡量本条第1款规定的适当的准确性和稳健性水平以及其他相关绩效指标的技术问题,欧盟委员会应当与计量和基准机构等相关利害关系人和组织合作,根据具体情况鼓励制定基准和测量方法。 3、高风险人工智能系统的准确性和相关准确度指标应当在系统附带的使用说明书中声明。 4、高风险人工智能系统应当尽可能具备弹性,以应对系统内部或系统运行环境中可能出现的错误、故障或不一致,尤其是由系统与自然人或其他系统的交互所引发的错误、故障或不一致。在这方面应当采取技术和组织措施。 高风险人工智能系统的稳健性可以通过技术冗余解决方案来实现,其中可能包括备份或故障安全计划。 被投放到市场或投入使用后仍继续学习的高风险人工智能系统,其开发应当尽可能消除或降低可能存在偏见的输出影响未来运行的输入(反馈回路)的风险,并确保能够通过适当的缓解措施妥善应对所有此类反馈回路。 5、高风险人工智能系统应当能够抵御未经授权的第三方试图利用系统漏洞改变其使用、输出或性能的行为。 旨在确保高风险人工智能系统网络安全的技术解决方案应当与相关情况和风险相适应。 在适当的情况下,解决人工智能特定漏洞的技术解决方案应当包括预防、检测、回应、解决和控制如下攻击的措施:试图操纵训练数据集(数据中毒)或训练中所使用的预训练组件(模型中毒)的攻击、意在引发人工智能模型出错的输入(对抗性示例或模型规避)、保密性攻击或模型缺陷。 |
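第十五条第5款列举了数据中毒、模型中毒、对抗性示例等针对人工智能系统的特定攻击,并要求采取与风险相适应的防护措施。下面给出一个示意性的Python草图(并非法案规定的措施,阈值与特征范围均为假设),展示在推理前对输入特征做简单的范围检查、将异常输入拒绝或转交人工复核,这只是多种可行技术防护手段之一。

def validate_input(features: dict, expected_ranges: dict) -> list:
    """示意:检查每个输入特征是否落在训练数据的合理范围内,返回越界特征名列表。"""
    out_of_range = []
    for name, value in features.items():
        low, high = expected_ranges.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            out_of_range.append(name)
    return out_of_range

# 假设的输入与基于训练数据统计得到的取值范围
ranges = {"age": (0, 120), "income": (0, 1_000_000)}
suspicious = validate_input({"age": 430, "income": 50_000}, ranges)
if suspicious:
    print("输入超出预期范围,拒绝或转交人工复核:", suspicious)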
SECTION 3 Obligations of providers and deployers of high-risk AI systems and other parties |
第三节 高风险人工智能系统提供者、部署者和其他主体的义务 |
Article 16 Obligations of providers of high-risk AI systems Providers of high-risk AI systems shall: (a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2; (b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted; (c) have a quality management system in place which complies with Article 17; (d) keep the documentation referred to in Article 18; (e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19; (f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service; (g) draw up an EU declaration of conformity in accordance with Article 47; (h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48; (i) comply with the registration obligations referred to in Article 49(1); (j) take the necessary corrective actions and provide information as required in Article 20; (k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2; (l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882. |
第十六条 高风险人工智能系统提供者的义务 高风险人工智能系统的提供者应承担以下义务: (a)确保其高风险人工智能系统符合本章第二节规定的要求; (b)在高风险人工智能系统中,或者在其包装或附带文件中(如无法在系统中注明)(如适用)注明提供者的名称、注册商品名或注册商标以及有效的通讯地址; (c)有符合本法第十七条规定的质量管理体系; (d)保存本法第十八条所规定的文件; (e)当系统在其控制下时,保留本法第十九条规定的高风险人工智能系统自动生成的日志; (f)确保高风险人工智能系统被投放到市场或投入使用之前,根据本法第四十三条所规定履行相应的符合性评定程序; (g)根据本法第四十七条规定起草欧盟符合性声明; (h)根据本法第四十八条规定,将CE标识标注在高风险人工智能系统中,或者其包装或附带文件上(如无法在系统中标注),以表明系统符合本法规定; (i)遵守本法第四十九条第1款规定的登记义务; (j)采取必要的纠正措施,并按照本法第二十条规定提供信息; (k)根据国家主管机关的合理要求,证明高风险人工智能系统符合本章第二节规定的要求; (l)确保高风险人工智能系统符合欧洲议会和欧盟理事会第2016/2102号、第2019/882号指令的无障碍要求。 |
Article 17 Quality management system 1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions,and shall include at least the following aspects: (a)a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; (b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system; (c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system; (d) examination, test and validation procedures to be carried out before, during and after the development of the high-riskAI system, and the frequency with which they have to be carried out; (e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements; (f) systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems; (g) the risk management system referred to in Article 9; (h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72; (i) procedures related to the reporting of a serious incident in accordance with Article 73; (j) the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties; (k) systems and procedures for record-keeping of all relevant documentation and information; (l) resource management, including security-of-supply related measures; (m) an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph. 2. The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. 3. Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an equivalent function under relevant sectoral Union law may include the aspects listed in paragraph 1 as part of the quality management systems pursuant to that law. 4. For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. 
To that end, any harmonised standards referred to in Article 40 shall be taken into account. |
第十七条 质量管理体系 1、高风险人工智能系统的提供者应当建立质量管理体系,确保符合本法规定。质量管理体系应当以书面政策、程序和指示的形式系统且有序地记录,并至少应包含如下方面: (a)监管合规战略,包括遵守符合性评定程序和高风险人工智能系统修改管理程序; (b)用于高风险人工智能系统的设计、设计控制和设计验证的技术、程序和系统性措施; (c)用于高风险人工智能系统的开发、质量控制和质量保障的技术、程序和系统性措施; (d)在高风险人工智能系统的开发前、开发期间和开发后进行的检查、测试和验证程序,及其必要的频次; (e)所适用的技术规范(包括标准)以及当相关统一标准未完全适用或未涵盖本章第二节规定的全部相关要求时,用于确保高风险人工智能系统符合这些要求的方法; (f)数据管理系统和程序,包括数据采集、数据收集、数据分析、数据标记、数据存储、数据过滤、数据挖掘、数据聚合、数据保留以及在高风险人工智能系统被投放到市场或投入使用之前为此而进行的、与数据有关的任何其他操作; (g)本法第九条所规定的风险管理体系; (h)根据本法第七十二条规定建立、执行和维护上市后监测系统; (i)根据本法第七十三条规定报告重大事件的程序; (j)处理与国家主管机关、其他相关机关(包括提供或支持获取数据的机关)、评定机构、其他运营方、客户或其他利益相关方的沟通事宜; (k)所有相关文档和信息的记录保存系统和流程; (l)资源管理,包括与供应保障相关的措施; (m)一个问责框架,规定管理者和其他工作人员在本款所规定各方面工作中的责任。 2、本条第1款所述各项措施应当与提供者的组织规模相匹配。无论如何,提供者都应当遵守所要求的严格程度和保护水平,以确保其高风险人工智能系统符合本法规定。 3、高风险人工智能系统的提供者,如果根据欧盟相关行业法律负有质量管理体系或同等功能的义务,其可以将本条第1款所列内容纳入其根据该法律建立的质量管理体系。 4、对于受欧盟金融服务法规定的内部治理、安排或流程要求约束的金融机构提供者,除本条第1款第(g)、(h)和(i)项外,应当视为其通过遵守相关欧盟金融服务法规定的内部治理安排或流程规则而履行了本法项下建立质量管理体系的义务。为此,本法第四十条中规定的任何统一标准均应当被考虑到。 |
Article 18 Documentation keeping 1. The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: (a) the technical documentation referred to in Article 11; (b) the documentation concerning the quality management system referred to in Article 17; (c)the documentation concerning the changes approved by notified bodies, where applicable; (d) the decisions and other documents issued by the notified bodies, where applicable; (e) the EU declaration of conformity referred to in Article 47. 2. Each Member State shall determine conditions under which the documentation referred to in paragraph 1 remains at the disposal of the national competent authorities for the period indicated in that paragraph for the cases when a provider or its authorised representative established on its territory goes bankrupt or ceases its activity prior to the end of that period. Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the technical documentation as part of the documentation kept under the relevant Union financial services law. |
第十八条 文档保存 1、在高风险人工智能系统被投放到市场或投入使用后的十年内,提供者应当留存以下文档,以备国家主管机关查阅: (a)本法第十一条所规定的技术文档; (b)本法第十七条所规定的质量管理体系文档; (c)评定机构批准的变更相关的文件(如适用); (d)评定机构签发的决定和其他文件(如适用); (e)本法第四十七条所规定的欧盟符合性声明。 2、各成员国应当确定,在其境内设立的提供者或其授权代表于本条第1款所规定期限届满前破产或停业时,本条第1款所规定文档在该期限内仍可供国家主管机关查阅的条件。 受欧盟金融服务法规定的内部治理、安排或流程要求约束的金融机构提供者,应当将上述技术文档作为相关欧盟金融服务法规定的文档的一部分进行维护。 |
Article 19Automatically generated logs 1. Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. 2. Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law. |
第十九条 自动生成日志 1、高风险人工智能系统的提供者应当保存本法第十二条第1款规定的、其高风险人工智能系统自动生成的日志,只要该等日志在其控制范围内。在不违反适用的欧盟和成员国法律规定的情况下,上述日志的保存时间应当与高风险人工智能系统的预期目的相适应,至少六个月。但所适用的欧盟或成员国法律,尤其是欧盟关于保护个人数据的法律另有规定的除外。 2、受欧盟金融服务法规定的内部治理、安排或流程要求约束的金融机构提供者,应当将其高风险人工智能系统自动生成的日志作为其按相关金融服务法规定保存的文档的一部分进行维护。 |
Article 20 Corrective actions and duty of information 1. Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. 2. Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. |
第二十条 纠正措施和信息义务 1、高风险人工智能系统的提供者如果认为或有理由认为其投放到市场或投入使用的高风险人工智能系统不符合本法规定,应当立即采取必要的纠正措施,使该系统符合本法要求,或视情况撤回、禁用或召回该系统。提供者应当相应地通知相关高风险人工智能系统的分销方,并通知部署者、授权代表和进口方(如适用)。 2、如果高风险人工智能系统存在本法第七十九条第1款所规定的风险,并且提供者意识到该风险,其应当立即与报告风险的部署者(如适用)合作调查原因,并(将情况)告知主管该高风险人工智能系统的市场监管机构和根据本法第四十四条为该高风险人工智能系统颁发证书的评定机构(如适用),尤其是(系统)不合规的性质和已采取的任何纠正措施。 |
Article 21 Cooperation with competent authorities 1. Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. 2. Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent such logs are under their control. Any information obtained by a competent authority pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78. |
第二十一条 与主管机关合作 1、高风险人工智能系统的提供者应当在主管机关提出合理要求后,以主管机关容易理解的语言(相关成员国指定的欧盟各机构官方语言之一),向该主管机关提供所有必要的信息和文档,以证明其高风险人工智能系统符合本章第二节规定的要求。 2、在主管机关提出合理要求后,提供者还应视情况允许提出请求的主管机关访问其控制范围内的、本法第十二条第1款规定的高风险人工智能系统自动生成的日志(如适用)。 主管机关根据本条规定获取的所有信息均应当按照本法第七十八条规定的保密义务采取保密措施。 |
Article 22 Authorised representatives of providers of high-risk AI systems 1. Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. 2. The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. 3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: (a) verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; (b) keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the certificate issued by the notified body; (c) provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; (d) cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system; (e) where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the competent authorities, on all issues related to ensuring compliance with this Regulation. 4. The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. |
第二十二条 高风险人工智能系统提供者的授权代表 1、在向欧盟市场供应高风险人工智能系统之前,注册地位于第三国的提供者应当通过书面授权的形式任命一名在欧盟设立的授权代表。 2、提供者应当确保其授权代表能够执行提供者所出具授权书中规定的任务。 3、授权代表应当执行提供者所出具授权书中规定的任务。授权代表应当根据要求,以主管机关指定的欧盟机构官方语言之一,向市场监管机构提供一份授权书副本。为本法之目的,提供者应当委托授权代表执行以下任务: (a)核实是否已经编制本法第四十七条规定的欧盟符合性声明和第十一条规定的技术文档,以及提供者是否已履行适当的符合性评定程序; (b)在高风险人工智能系统被投放到市场或投入使用后10年内,留存委托其担任授权代表的提供者的详细联系方式、本法第四十七条所规定的欧盟符合性声明副本、技术文档以及评定机构颁发的证书(如适用),以备主管机关以及本法第七十四条第10款规定的国家机关或机构查阅; (c)根据主管机关的合理要求,向其提供所有必要的信息和文档(包括本款(b)项所规定的信息和文档),以证明高风险人工智能系统符合本章第二节规定的要求,包括在该等日志处于提供者控制范围内时,允许其访问本法第十二条第1款规定的高风险人工智能系统自动生成的日志; (d)根据主管机关的合理要求,在其针对高风险人工智能系统采取的任何行动中与其合作,尤其是降低和缓和高风险人工智能系统带来的风险; (e)遵守本法第四十九条第1款所规定的登记义务(如适用),或者,如果登记是由提供者自行完成,则确保本法附录八第A条第3款所规定的信息正确。 授权书应当授权主管机关可以就确保遵守本法规定所涉及的所有问题,在联系提供者之外或代替联系提供者,与授权代表进行联系。 4、如果授权代表认为或有理由认为提供者的行为违反本法规定的义务,其应当终止授权。发生此种情况时,授权代表应当立即通知相关市场监管机构,并告知相关评定机构(如适用)授权终止及终止原因。 |
Article23 Obligations of importers 1. Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: (a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system; (b) the provider has drawn up the technical documentation in accordance with Article11 and Annex IV; (c) the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; (d) the provider has appointed an authorised representative in accordance with Article 22(1). 2. Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. 3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable. 4. Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. 5. Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Article 47. 6. Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. 7. Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it. |
第二十三条 进口方的义务 1、在将高风险人工智能系统投放到市场之前,进口方应当通过验证以下事项,确保该系统符合本法规定: (a)高风险人工智能系统的提供者已经履行本法第四十三条所规定的相关符合性评定程序; (b)提供者已经根据本法第十一条和附录四规定编制技术文档; (c)系统已经加贴所需的CE标识,并附有本法第四十七条所规定的欧盟符合性声明和使用说明; (d)提供者已经根据本法第二十二条第1款规定指定一名授权代表。 2、如果进口方有充足的理由认为高风险人工智能系统不符合本法规定、被伪造或附带伪造文档,则在系统符合要求之前,不得将该系统投放到市场。如果高风险人工智能系统存在本法第七十九条第1款所规定的风险,进口方应当通知系统提供者、授权代表和市场监管机构。 3、进口方应当在高风险人工智能系统及其包装或附带文档(如适用)中注明其名称、注册商品名称或注册商标以及通讯地址。 4、进口方应当确保其负责高风险人工智能系统时,储存或运输条件(如适用)不会危及系统符合本章第二节规定的要求。 5、在高风险人工智能系统被投放到市场或投入使用后的10年内,进口方应当保留一份评定机构颁发的证书(如适用)、使用说明以及本法第四十七条所规定的欧盟符合性声明的副本。 6、进口方应当按主管机关的合理要求,以主管机关易于理解的语言向其提供所有必要的信息和文档(包括本条第5款所规定的文档),用以证明高风险人工智能系统符合本章第二节规定的要求。为此,进口方还应当确保向上述主管机关提供技术文档。 7、相关主管机关对进口方投放到市场的高风险人工智能系统采取任何措施时,进口方应当予以配合,特别是为了降低和缓和其所带来的风险。 |
Article 24 Obligations of distributors 1. Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). 2. Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. 3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. 4. A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. 5. Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2. 6. Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it. |
第二十四条 分销方的义务 1、在市场上供应高风险人工智能系统之前,分销方应当验证其是否已经加贴所需的CE标识,是否附带本法第四十七条所规定的欧盟符合性声明和使用说明的副本,以及该系统的提供者和进口方(如适用)是否遵守本法第十六条第(b)款和第(c)款以及第二十三条第3款规定的各自义务。 2、如果分销方根据其掌握的信息认为或有理由认为高风险人工智能系统不符合本章第二节规定的要求,在该系统符合上述要求之前,不得在市场上供应(分销)该高风险人工智能系统。此外,如果高风险人工智能系统存在本法第七十九条第1款所规定的风险,分销方应当通知该系统的提供者或进口方(如适用)。 3、分销方应当确保其负责高风险人工智能系统时,储存或运输条件(如适用)不会危及系统符合本章第二节规定的要求。 4、分销方根据其掌握的信息认为或有理由认为其在市场上供应的高风险人工智能系统不符合本章第二节规定的要求的,应当采取必要的纠正措施,使该系统符合上述要求,或撤回或召回该系统,或者应当确保提供者、进口方或任何相关运营方(视情况而定)采取上述纠正措施。如果高风险人工智能系统存在本法第七十九条第1款所规定的风险,分销方应当立即通知该系统的提供者或进口方以及该高风险人工智能系统的相关主管机关,特别是应当详细说明不符合规定的具体情况和采取的任何纠正措施。 5、根据相关主管机关的合理要求,高风险人工智能系统的分销方应当向该主管机关提供与其根据本条第1款至第4款规定采取的行动有关的所有信息和文档,以证明该系统符合本章第二节规定的要求。 6、相关主管机关对分销方在市场上供应的高风险人工智能系统采取任何措施时,分销方应当予以配合,特别是为了降低或缓解其所带来的风险。 |
Article 25 Responsibilities along the AI value chain 1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: (a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated; (b) they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6; (c) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6. 2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation. 3. In the case of high-risk AI systems that are safety components of products covered by the Union harmonization legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances: (a) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer; (b) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market. 4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence. The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. 
When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format. 5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law. |
第二十五条 人工智能价值链上的责任 1、符合下列任一条件的分销方、进口方、部署者或其他第三方,均应被视为本法所规定的高风险人工智能系统的提供者,并应遵守本法第十六条规定的提供者义务: (a)将其名称或商标标注在已经投放到市场或投入使用的高风险人工智能系统上,但不影响相关主体签订合同另行约定其义务分担方式; (b)对已经投放到市场或已经投入使用的高风险人工智能系统进行实质性修改,使其仍然属于本法第六条规定的高风险人工智能系统; (c)修改尚未被认定为高风险并且已经被投放到市场或投入使用的人工智能系统的预期目的(包括通用人工智能系统),导致相关人工智能系统根据本法第六条规定成为高风险人工智能系统。 2、如果发生本条第1款所规定情形,就本法而言,最初将人工智能系统投放到市场或投入使用的提供者不再被视为该人工智能系统的提供者。原提供者应当与新提供者密切合作,提供必要的信息,并提供履行本法规定的义务所需的合理预期的技术渠道和其他协助,特别是为遵守高风险人工智能系统的符合性评定。如果原提供者已明确说明其人工智能系统不得被更改为高风险人工智能系统,因而不承担移交文档的义务,则本款规定不适用。 3、如果高风险人工智能系统是本法附录一第A条所列欧盟统一立法覆盖范围内产品的安全组件,则产品制造方应当被视为该高风险人工智能系统的提供者,并在出现下列任意一种情况时遵守本法第十六条规定的义务: (a)该高风险人工智能系统以产品制造方的名义或使用产品制造方的商标与产品一起被投放到市场; (b)在产品被投放到市场后,该高风险人工智能系统以产品制造方的名义或使用产品制造方的商标被投入使用。 4、为使高风险人工智能系统的提供者能够完全遵守本法规定的义务,高风险人工智能系统的提供者与供应被用于或被集成到高风险人工智能系统中的人工智能系统、工具、服务、组件或流程的第三方应当签订书面合同,明确约定基于一般公认的技术水平所必要的信息、能力、技术渠道和其他协助。本款规定不适用于根据自由和开源许可证向公众提供工具、服务、流程或组件(通用人工智能模型除外)的第三方。 人工智能办公室可以为高风险人工智能系统提供者与供应被用于或被集成到高风险人工智能系统中的工具、服务、组件或流程的第三方之间的合同起草和推荐可自愿选择使用的示范条款。在起草上述示范条款时,人工智能办公室应当考虑特定行业或商业场景中可能适用的合同要求。可自愿选择使用的示范条款应当以易于使用的电子版格式发布并免费提供。 5、本条第2款和第3款不影响根据欧盟和成员国法律遵守和保护知识产权、商业机密信息和商业秘密的要求。 |
Article 26 Obligations of deployers of high-risk AI systems 1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. 2. Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. 3. The obligations set out in paragraphs 1 and 2, are without prejudice to other deployer obligations under Union or national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider. 4. Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. 5. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law. 6. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data. Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law. 7. Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. 8. 
Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. 9. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680. 10. Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 for the processing of biometric data. Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment. Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems. 11. 
Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply. 12. Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation. |
第二十六条 高风险人工智能系统部署者的义务 1、高风险人工智能系统的部署者应当采取适当的技术和组织措施,确保根据本条第3款和第6款规定按照系统附带的使用说明使用该系统。 2、部署者应当指派具备必要能力、经过必要培训并拥有必要权限的自然人负责人工监督,并提供必要的支持。 3、本条第1款和第2款规定的义务不影响欧盟或成员国法律规定的其他部署者义务,也不影响部署者为执行提供者提出的人工监督措施而自行安排自身的资源和活动。 4、在不影响本条第1款和第2款规定的前提下,在部署者控制输入数据的情况下,部署者应当确保输入数据具有相关性,并且就高风险人工智能系统的预期目的而言具有充分的代表性。 5、部署者应当根据使用说明监测高风险人工智能系统的运行情况,并在相关情形发生时根据本法第七十二条规定通知提供者。如果部署者有理由认为,根据指示使用高风险人工智能系统可能会导致该人工智能系统出现本法第七十九条第1款所规定的风险,部署者应当立即通知提供者或分销方以及相关市场监管机构,并应当暂停使用该系统。如果部署者发现重大事件,还应当立即首先通知提供者,然后通知进口方或分销方以及相关市场监管机构。如果部署者无法联系到提供者,参照适用本法第七十三条。上述义务不适用于作为执法机构的人工智能系统部署者的敏感操作数据。 对于受欧盟金融服务法关于其内部治理、安排或流程要求约束的金融机构部署者,其遵守相关金融服务法关于内部治理安排、流程和机制的规则,应被视为履行了本条前款规定的监测义务。 6、除适用的欧盟或成员国法律,特别是欧盟关于保护个人数据的法律另有规定外,高风险人工智能系统的部署者应当保存其控制范围内的高风险人工智能系统自动生成的日志,保存期限应当与高风险人工智能系统的预期目的相适应,且不少于六个月。 受欧盟金融服务法规定的内部治理、安排或流程要求约束的金融机构部署者,应当将日志作为其根据相关欧盟金融服务法规定保存的文档的一部分进行维护。 7、在工作场所投入使用或使用高风险人工智能系统之前,作为雇主的部署者应当告知员工代表和受影响的员工其将受到高风险人工智能系统使用的影响。该等信息应当按照欧盟和成员国关于员工及其代表知情权的法律和惯例所规定的规则和程序提供(如适用)。 8、公权力机构或欧盟各机构作为高风险人工智能系统部署者时,应当遵守本法第四十九条所规定的登记义务。当该等部署者发现其计划使用的高风险人工智能系统尚未在本法第七十一条所规定的欧盟数据库中登记时,不得使用该系统,并应当通知(该系统)的提供者或分销方。 9、在适用的情况下,高风险人工智能系统的部署者应当使用根据本法第十三条规定提供的信息,履行其根据第2016/679号条例第35条或第2016/680号指令第27条进行数据保护影响评估的义务。 10、在不违反第2016/680号指令的情况下,为了有针对性地搜查涉嫌犯罪或被判定有罪的人,部署用于非实时远程生物识别的高风险人工智能系统的部署者应当事先或在无不当迟延的情况下(最迟不超过48小时)请求其决定具有约束力且须接受司法审查的司法机关或行政机关授权其使用该系统,但该系统被用于根据与犯罪直接相关的客观且可核查的事实初步识别潜在犯罪嫌疑人的除外。每一次使用都应当仅限于调查特定刑事犯罪所必需。 如果根据本款前一项规定提出的授权申请被拒绝,应当立即停止使用与所申请授权有关的非实时远程生物识别系统,并删除与所申请授权的高风险人工智能系统的使用有关的个人数据。 在任何情况下,上述用于非实时远程生物识别的高风险人工智能系统都不得被用于与刑事犯罪、刑事诉讼、真实且紧迫或真实且可预见的刑事犯罪威胁或搜索特定失踪人员无关的无针对性的执法目的。应当确保执法机关不得仅根据此类非实时远程生物识别系统的输出做出对个人产生不利法律影响的决定。 本款规定不影响欧洲议会和欧盟理事会第2016/679号条例第9条和第2016/680号指令第10条关于生物特征数据处理规定的适用。 无论部署目的是什么或部署者是谁,上述高风险人工智能系统的每一次使用都应当被记录在相关警方的档案中,并应当按要求提供给相关市场监管机构和国家数据保护机构,但不包括披露与执法有关的敏感操作数据。本项规定不影响欧洲议会和欧盟理事会第2016/680号指令赋予监管机构的权力。 部署者应当向相关市场监管机构和国家数据保护机构提交年度报告,说明其使用非实时远程生物识别系统的情况,但不包括披露与执法有关的敏感操作数据。上述报告可以汇总涵盖一个以上的部署。 各成员国可以根据欧盟法,对非实时远程生物识别系统的使用制定更严格的法律。 11、在不影响本法第五十条适用的情况下,本法附录三所规定的高风险人工智能系统的部署者在做出或协助做出涉及自然人的决定时,应当告知该等自然人他们将受到高风险人工智能系统使用的影响。对用于执法目的的高风险人工智能系统,应适用欧洲议会和欧盟理事会第2016/680号指令第13条。 12、相关主管机关根据本法规定采取与高风险人工智能系统有关的任何措施时,部署者应当予以配合。 |
Article 27 Fundamental rights impact assessment for high-risk AI systems 1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of: (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; (c) the categories of natural persons and groups likely to be affected by its use in the specific context; (d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13; (e) a description of the implementation of human oversight measures, according to the instructions for use; (f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. 2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information. 3. Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify. 4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment. 5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner. |
第二十七条 高风险人工智能系统的基本权利影响评估 1、除拟在附录三第2条所列领域使用的高风险人工智能系统外,在部署本法第六条第2款规定的高风险人工智能系统之前,公权力机关或提供公共服务的私营机构部署者,以及附录三第5条(b)款和(c)款所规定的高风险人工智能系统的部署者,应当评估使用该系统可能对基本权利产生的影响。为此,部署者的评估应当包括如下内容: (a)描述部署者将按照预期目的使用高风险人工智能系统的流程; (b)描述每个高风险人工智能系统的使用时间和使用频率; (c)在特定情况下可能受其使用影响的自然人和群体的类别; (d)考虑到提供者根据本法第十三条规定提供的信息,可能对根据本款(c)项确定的自然人或群体类别产生影响的具体损害风险; (e)按使用说明,说明人工监督措施的实施情况; (f)当上述风险成为现实时应当采取的措施,包括内部治理和投诉机制安排。 2、本条第1款规定的义务适用于高风险人工智能系统的首次使用。在类似情形下,部署者可以依据以前开展的基本权利影响评估或提供者开展的既有影响评估。在使用高风险人工智能系统期间,如果部署者认为本条第1款所规定的任何要素已经发生变化或与现状不符,部署者应当采取必要措施更新信息。 3、部署者完成本条第1款所规定的评估后,应当将评估结果报送市场监管机构,并将填写完毕的本条第5款所规定的模板作为报送文件的一部分一并提交。发生本法第四十六条第1款所规定情形时,可以免除部署者的报告义务。 4、如果部署者已经通过完成欧洲议会和欧盟理事会第2016/679号条例第35条或第2016/680号指令第27条规定的数据保护影响评估履行了本条规定的任何义务,则本条第1款所规定的基本权利影响评估应当作为上述数据保护影响评估的补充。 5、人工智能办公室应当制定一个调查问卷模板(包括利用自动化工具制定),以便部署者以简化的方式履行其在本条项下的义务。 |
SECTION 4 Notifying authorities and notified bodies |
第四节 通报机构和评定机构 |
Article 28 Notifying authorities 1. Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. Those procedures shall be developed in cooperation between the notifying authorities of all Member States. 2. Member States may decide that the assessment and monitoring referred to in paragraph 1 is to be carried out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008. 3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies, and that the objectivity and impartiality of their activities are safeguarded. 4. Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies. 5. Notifying authorities shall offer or provide neither any activities that conformity assessment bodies perform, nor any consultancy services on a commercial or competitive basis. 6. Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. 7. Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in fields such as information technologies, AI and law, including the supervision of fundamental rights. |
第二十八条 通报机构 1、每个成员国都应当指定或设立至少一个通报机构,负责制定和执行符合性评定机构的评估、指定和通报及其监督的必要程序。该等程序应当由欧盟所有成员国的通报机构合作制定。 2、各成员国可以决定由欧洲议会和欧盟理事会第765/2008号条例所规定的国家认证机构开展本条第1款所规定的评估和监测。 3、通报机构的设立、组织和运作,不得与符合性评定机构产生利益冲突,并应保障其活动的客观性和公正性。 4、通报机构的组织设立,应当确保作出与符合性评定机构之通报有关的决定的主管人员,不同于承担该等机构评估工作的人员。 5、通报机构不得提供符合性评定机构所实施的任何活动,也不得以商业或竞争方式提供任何咨询服务。 6、根据本法第七十八条规定,通报机构应当对其获得的信息保密。 7、为确保按规定执行任务,通报机构应当配备足够的工作人员。工作人员应当掌握履行其职责所必要的信息技术、人工智能和法律等领域专业知识(如适用),包括基本权利监督方面的知识。 |
Article 29 Application of a conformity assessment body for notification 1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established. 2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 31. Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added. 3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 31. 4. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous compliance with all the requirements laid down in Article 31. |
第二十九条 符合性评定机构的通知申请 1、符合性评定机构应当向其所在成员国的通报机构提交通知申请。 2、通知申请应当附上一份符合性评定机构声明其能够胜任的符合性评定活动、符合性评定模块和人工智能系统类型的说明,以及国家认证机构颁发的认证证书(如有),证明符合性评定机构符合本法第三十一条规定的要求。 在任何欧盟统一立法项下,与评定机构申请人现有指派有关的任何有效文件都应当附上。 3、如果相关符合性评定机构无法提供认证证书,其应当向通报机构提供所有必要的依据文件,以核查、认可并定期监测其是否符合本法第三十一条规定的要求。 4、对于根据其他欧盟统一立法指定的评定机构,与该等指定有关的所有文档和证书可用于支持其在本法项下的指定程序(适当时)。发生相关变化时,评定机构应当更新本条第2款和第3款所规定的文件,以便负责评定机构的主管机关能够监测和核实其是否持续遵守本法第三十一条规定的所有要求。 |
Article 30 Notification procedure 1. Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 31. 2. Notifying authorities shall notify the Commission and the other Member States, using the electronic notification tool developed and managed by the Commission, of each conformity assessment body referred to in paragraph 1. 3. The notification referred to in paragraph 2 of this Article shall include full details of the conformity assessment activities, the conformity assessment module or modules, the types of AI systems concerned, and the relevant attestation of competence. Where a notification is not based on an accreditation certificate as referred to in Article 29(2), the notifying authority shall provide the Commission and the other Member States with documentary evidence which attests to the competence of the conformity assessment body and to the arrangements in place to ensure that that body will be monitored regularly and will continue to satisfy the requirements laid down in Article 31. 4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying authority where it includes documentary evidence referred to in Article 29(3). Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant conformity assessment body. |
第三十条 通知程序 1、通报机构仅可通报符合本法第三十一条规定要求的符合性评定机构。 2、通报机构应当使用欧盟委员会开发和管理的电子通报工具,将本条第1款项下所有符合性评定机构的信息通报欧盟委员会和其他成员国。 3、本条第2款所规定的通报应当包括(符合性评定机构的)符合性评定活动、符合性评定模块、相关人工智能系统类型和相关能力证明的所有细节。如果通报并非依据本法第二十九条第2款所规定的认证证书,通报机构应当向欧盟委员会和其他成员国提供依据文件,证明符合性评定机构的能力以及为确保该机构定期受到监测并持续符合本法第三十一条规定的要求所作出的安排。 4、只有当欧盟委员会或其他成员国在通报机构送达包含本法第二十九条第2款所规定的认证证书的通知后两周内,或者在通报机构送达包含本法第二十九条第3款所规定的依据文件的通知后两个月内均未提出反对意见时,相关符合性评定机构方可开展评定机构的活动。 如有欧盟委员会或其他成员国提出异议,欧盟委员会应当立即与相关成员国和符合性评定机构协商。据此,欧盟委员会应当决定授权是否合理。欧盟委员会应当将其决定告知相关成员国和相关符合性评定机构。 |
Article 31 Requirements relating to notified bodies 1. A notified body shall be established under the national law of a Member State and shall have legal personality. 2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements. 3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall ensure confidence in their performance, and in the results of the conformity assessment activities that the notified bodies conduct. 4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such high-risk AI systems for personal purposes. 5. Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall, in particular, apply to consultancy services. 6. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities. 7.Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. 8. Notified bodies shall have procedures for the performance of activities which take due account of the size of a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system concerned. 9. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State in which they are established in accordance with national law or that Member State is itself directly responsible for the conformity assessment. 
10. Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility. 11. Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the relevant types of AI systems, data and data computing, and relating to the requirements set out in Section 2. 12. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards. |
第三十一条 关于评定机构的要求 1、评定机构应当根据成员国的国内法设立,并应当具有法人资格。 2、评定机构应当满足完成其任务所需的组织、质量管理、资源和流程要求,以及适当的网络安全要求。 3、评定机构的组织结构、职责分工、报告流程和运作,应当确保人们对其工作表现及其开展的符合性评定活动之结果的信任。 4、评定机构应当独立于其开展符合性评定活动的高风险人工智能系统的提供者。评定机构也应当独立于在其所评定的高风险人工智能系统中具有经济利益的任何其他运营方,以及提供者的所有竞争对手。上述规定并不排除其使用符合性评定机构运营所需的、经评估的高风险人工智能系统,也不排除其为个人目的使用此类高风险人工智能系统。 5、符合性评定机构、其高层管理人员和负责执行符合性评定任务的人员均不得直接参与高风险人工智能系统的设计、开发、销售或使用,也不得代表参与上述活动的当事方。上述主体不得从事任何可能与其在获通报的符合性评定活动中的决策独立性或职业操守相冲突的活动。上述规定尤其适用于咨询服务。 6、评定机构的组织和运行应当确保其活动的独立性、客观性和公正性。评定机构应当以书面形式确立并执行相应的结构和程序,以保障公正性,并在其整个组织、人员和评定活动中推广和运用公正原则。 7、评定机构应当具备文件化的程序,确保其工作人员、委员会、分支机构、分包方和任何相关机构或外部机构人员均根据本法第七十八条规定对其在开展符合性评定活动中获取的信息保密,但法律要求其披露的除外。评定机构的工作人员有义务对其根据本法规定执行任务时获得的所有信息保密,但对其开展评定活动所在成员国的通报机构不在此限。 8、评定机构应当有工作流程,流程应适当考虑提供者的规模、所在行业、结构及相关人工智能系统的复杂性。 9、评定机构应当为其开展符合性评定活动购买适当的责任保险,但其所在成员国根据国内法律承担责任或者该成员国本身直接负责符合性评定的除外。 10、评定机构应当以最高程度的职业操守和特定领域的必要能力执行本法规定的所有任务,无论该等任务是由评定机构自行执行还是由其代理人在评定机构承担责任的条件下执行。 11、评定机构应当有充足的内部能力,能够有效地评估外部各主体代表其执行的任务。评定机构应始终拥有足够的管理、技术、法律和科学人员,他们具备与相关类型的人工智能系统、数据和数据计算,以及本章第二节所规定的要求有关的经验和知识。 12、评定机构应当参与本法第三十八条所规定的协调活动。评定机构还应当直接参加或委托代表参加欧洲标准化组织,或者确保其了解并跟进相关标准。 |
Article 32 Presumption of conformity with requirements relating to notified bodies Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union, it shall be presumed to comply with the requirements set out in Article 31 in so far as the applicable harmonised standards cover those requirements. |
第三十二条 符合评定机构相关要求的推定 如果符合性评定机构证明其符合相关统一标准或其部分内容(其索引已在《欧洲联盟公报》上公布)所列明的准则,则在上述适用的统一标准涵盖本法第三十一条规定要求的范围内,该符合性评定机构应当被推定为符合本法第三十一条规定的要求。 |
Article 33 Subsidiaries of notified bodies and subcontracting 1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31, and shall inform the notifying authority accordingly. 2. Notified bodies shall take full responsibility for the tasks performed by any subcontractors or subsidiaries. 3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider. Notified bodies shall make a list of their subsidiaries publicly available. 4. The relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation shall be kept at the disposal of the notifying authority for a period of five years from the termination date of the subcontracting. |
第三十三条 评定机构的下属机构和分包方 1、如果评定机构将与符合性评定有关的具体工作分包给第三方或委派给其下属机构,其应当确保上述分包方或下属机构符合本法第三十一条规定的要求,并相应告知通报机构。 2、评定机构应当对其分包方或下属机构开展的工作承担全部责任。 3、未经提供者同意,评定机构不得将评定活动委派给分包方或下属机构。评定机构应当公开其下属机构名单。 4、有关分包方或下属机构之资质评估及其根据本法规定开展工作的相关文件,应当自分包终止之日起保存五年,以供通报机构查阅。 |