

大變局?揭秘全球AI治理立法趨勢

    日期:2025-12-12     作者:韓海嬌(國際法專業委員會、北京煒衡(上海)律師事務所)、黃一埔(北京煒衡(上海)律師事務所)

The launch of ChatGPT has underscored the increasingly extensive participation of Generative Artificial Intelligence (“Generative AI” or “AI”) in our daily lives and the consequent necessity of properly regulating its development and application. Trained on large volumes of data and tasked with generating answers in response to prompts by producing text, images, and audio, Generative AI is capable of resolving complex tasks and promoting productivity and innovation across sectors. However, AI has also given rise to challenges and risks.

ChatGPT的推出凸顯了生成式人工智能(“生成式人工智能”或“人工智能”)在我們日常生活中的日益廣泛參與,因而規范生成式人工智能的發展和應用顯得日益迫在眉睫。生成式人工智能通過對大量數據進行訓練,被賦予以文本、圖片和音頻等方式生成答案的任務,能夠解決復雜問題并推動跨行業的生產力和創新,但挑戰和風險也隨之而來。

I The AI-Associated Risks: Why Must AI Be Regulated?

人工智能相關風險:為什么需規范人工智能?

Firstly, AI may undermine privacy by inadvertently and excessively collecting personal information; the collected data often exceed what is necessary for the intended uses, which could lead to unintended exposure or misuse. Secondly, AI may lead to the spread of misinformation and disinformation. For instance, because it cannot verify the contents it generates from training data, AI can produce highly persuasive fabricated references to non-existent sources or other false content. Thirdly, AI poses a risk of bias and discrimination. Because it is trained on massive volumes of biased data containing discriminatory stereotypes that reflect the systematic inequalities of dominant cultures, AI could produce discriminatory treatment of certain marginalised societal groups based on their social backgrounds. Lastly, AI may threaten public safety and security. AI systems may be misused to generate inappropriate content such as pornography, violence, or even incitement to suicide or self-injury, thereby endangering public safety. Concurrently, AI could be maliciously exploited for illegal or terrorist activities.

首先,人工智能可能通過無意或過度收集個人信息,且收集的數據往往超出用于預期目的所需范圍,可能導致意外的信息曝光或濫用。其次,人工智能可能導致錯誤信息和虛假信息的傳播。比如,由于生成內容的驗證能力不足,人工智能可能生成說服力雖高但卻指向不存在資源或虛假內容的虛構引用。第三,人工智能存在偏見和歧視的風險。由于對包含反映主導文化系統性不平等的歧視性刻板印象的海量偏倚數據進行訓練,人工智能可能根據社會背景對某些邊緣化社會群體進行歧視性處理。最后,人工智能可能對公共安全和安全性構成威脅。人工智能可能被濫用以生成不當內容,如色情、暴力,甚至煽動自殺或自傷行為,從而對公共安全構成風險。與此同時,人工智能也可能被惡意利用進行非法或恐怖活動。

II The Global Trend of AI Governance

人工智能治理之全球趨勢

At the international level, several international organizations have agreed on numerous initiatives for AI regulation in an attempt to regulate AI through developers’ voluntary alignment with certain principles, such as the Bletchley Declaration, the Hiroshima AI Principles, and the OECD AI Principles. Notably, the Council of Europe is drafting the world's first AI convention, which, upon entry into force, would oblige contracting states to enact legislation mandating risk management measures for AI development. These initiatives have substantially reached a consensus on the principles that should govern AI development and application, including, among others, transparency, fairness, safety, and privacy protection. Under these principles, AI providers must ensure sufficient transparency during the operation of AI systems by providing all relevant information in an accessible, clear, accurate, and timely manner, enabling users to understand the general functionality, level of accuracy, associated risks, and corresponding risk mitigation measures of the AI. The development of AI systems must maintain an adequate level of fairness and respect for the rule of law, democratic principles, and human rights. Necessary measures should be adopted to prevent algorithmic discrimination, such as equity assessments, the use of diverse and representative data, safeguards against proxies for demographic features, accessibility for disabled people, disparity evaluation, and appropriate human supervision. Developers must conduct pre-market risk management assessments to identify and mitigate AI-associated risks, adopt cybersecurity measures to ensure system robustness and stability, and continuously monitor post-market compliance with the applicable standards.
More importantly, regulation on personal data protection must be enacted in conjunction to guarantee that data collection and processing by AI are permitted only with user consent and only to the extent necessary for the intended purposes. However, the unenforceability of these initiatives has left AI substantially unregulated de facto, thereby necessitating the enactment of binding regulation for AI governance.

在國際層面上,多個國際組織已就人工智能監管達成多項倡議,試圖通過開發者自愿遵循特定原則來規范,比如 Bletchley宣言、廣島人工智能原則和OECD 人工智能原則。值得注意的是,歐洲理事會正在起草全球首個人工智能公約,一旦生效,將要求締約國制定立法,強制規定人工智能發展的風險管理措施。這些倡議在管理人工智能發展和應用的原則上達成了實質性共識,其中包括透明度、公平性、安全性和隱私保護等。根據這些原則,人工智能提供者必須確保在人工智能系統運行期間提供所有相關信息,以便用戶以易于獲取、清晰、準確和及時的方式了解人工智能的一般功能、準確度水平、相關風險和相應的風險緩解措施。人工智能系統的開發必須保持適當的公平性和遵守法治、民主原則和人權,必須采取必要措施防止算法歧視,如公平評估、利用多樣化和具有代表性的數據、防止使用人口統計特征代理、確保殘障人士的可訪問性、評估差距并保持適當的人類監督。開發者必須進行市場前風險管理評估,以識別和減輕人工智能相關風險,采取網絡安全措施確保系統的穩健性和穩定性,并持續監測市場后符合標準的情況。更重要的是,必須同時制定關于私人數據保護的法規,以確保人工智能對數據的收集和處理僅在用戶同意的情況下,且僅限于預期目的所需范圍。然而,這些倡議的難以實施使得人工智能在實質上幾乎未受到規范,因此需要制定對人工智能治理具有約束力的法規。

At the domestic level, many jurisdictions have proposed different approaches to AI regulation. Among these, the UK and the US have opted to regulate AI within the scope of existing legislation. Pursuant to the UK AI white paper, regulators will be directed to exercise the powers delegated under existing legislation to issue guidance obliging AI developers to comply with specified principles. Similarly, President Biden has signed an Executive Order mandating that developers conduct safety tests and report the results to the Federal Government, and ordering relevant authorities to issue standards and guidance to monitor AI development’s compliance with the principles specified under the US AI Bill of Rights. However, this approach is apparently flawed due to the inability of existing regulations to address the specific risks of AI. For instance, while the recent Online Safety Act in the UK could partially ensure the safety of certain internet services by restricting illegal activities and the production of harmful content, these protections are not directly applicable to AI unless the AI is deployed within the specified internet services. Conversely, Canada and the EU have opted to enact specialized AI legislation. While a Private Members’ Bill for AI regulation has also been introduced to the UK House of Lords by Conservative Lord Holmes of Richmond, this Bill is overly simplified and substantially resembles the current approach of the UK Government due to its reliance on relevant authorities to enact delegated legislation to regulate AI in accordance with specified principles. More importantly, this Bill is highly unlikely to proceed due to the lack of support from the incumbent Conservative government.

在國內層面上,許多司法管轄區已提出了不同的人工智能監管方法。其中,英國和美國選擇在現有立法范圍內對人工智能進行監管。根據英國的人工智能白皮書,監管機構將被指示行使在現有立法下委派的權力,發布指南,要求人工智能開發者遵守特定原則。同樣,拜登總統簽署了一項行政命令,要求開發者進行安全測試并向聯邦政府報告結果,并命令相關機構發布監控人工智能開發符合美國人工智能權利法案指定原則的標準和指南。然而,這種方法顯然存在缺陷,因為現有法規無法解決人工智能的特定風險。比如,盡管最近英國的網絡安全法案可部分確保某些互聯網服務的安全,限制非法活動和有害內容的制作,但除非人工智能部署在特定互聯網服務中,否則這些保護措施并不直接適用于人工智能。相反,加拿大和歐盟選擇制定專門的人工智能立法。盡管保守黨Richmond的Holmes勛爵向英國上議院提出了一項人工智能監管的私人議員法案,但由于該法案過于簡化,且實質上類似于英國政府目前的監管辦法,因此此法案很可能不會獲得支持。更重要的是,由于缺乏現任保守黨政府的支持,這項法案通過的可能性極低。

III The EU AI Act: Benchmark for Global AI Legislation?

歐盟人工智能法案:全球人工智能立法標桿?

The EU is known for its strong stance on digital and data regulation and has enacted several key regulations relevant to AI regulation, such as the GDPR and the DSA. The GDPR, the strictest regulation for the protection of personal information in the world, governs the collection, processing, and transfer of personal data across the EU. Under Article 5 of the GDPR, personal data can only be collected to the extent necessary for a legitimate purpose, with an appropriate level of accuracy and security, and in a lawful, fair, and transparent manner. The lawfulness of data processing is contingent upon the satisfaction of at least one of the grounds specified under Article 6 of the GDPR, including informed consent. Articles 12 to 22 of the GDPR protect the rights of data subjects in connection with data processing. The DSA, for its part, similar to the UK Online Safety Act, ensures the safety of digital services by restricting illegal activities, disinformation, and the production of harmful content. While the GDPR and DSA could partially address the risks to privacy and safety posed by AI, these regulations face issues similar to those of the UK Online Safety Act, namely their inapplicability to certain uses of Generative AI and the consequent inability to address AI-specific risks. Consequently, the enactment of specialized legislation to regulate AI has become increasingly urgent.

歐盟以其對數字和數據監管的堅定立場而聞名,并頒布了諸如通用數據保護條例GDPR和數字服務法DSA等與人工智能監管相關的關鍵法規。GDPR作為全球最嚴格的個人信息保護法規,管理著歐盟范圍內私人數據的收集、處理和轉移。根據GDPR第5條,私人數據只能在合法、公正、透明的方式下,僅收集到達到合理目的所需范圍,具有適當的準確性和安全性。數據處理的合法性取決于至少滿足GDPR第6條規定的目的之一,包括獲得知情同意。GDPR第12條至第22條保護與數據處理相關的數據主體的權利。相反,DSA類似于英國的網絡安全法案,通過限制非法活動、虛假信息和有害內容的制作,確保數字服務的安全性。雖然GDPR和DSA在一定程度上可應對人工智能帶來的隱私和安全風險,但這些法規面臨著與英國的網絡安全法案類似的問題,即在某些生成式人工智能的使用方面不適用,從而無法解決與人工智能相關的特定風險。因此,制定專門的法規以規范人工智能變得日益迫切。

In response, the European Commission, on 21st April 2021, published a legislative proposal for a Regulation intended to establish harmonised rules for AI regulation with direct applicability across the EU, which, upon its entry into force, would be the world’s first specialised legislation for AI regulation. The European Commission proposed to regulate Generative AI through a tiered, risk-based approach to ensure that regulated AI systems are subject to rules proportionate to their associated risks. AI systems are categorised into four classes of risk based on their intended uses, with each class subject to different regulatory obligations. On 8th December 2023, the EU AI Act received its final approval and will enter into force in early 2024. The final version of the EU AI Act retained the tiered, risk-based approach proposed by the European Commission.

作為回應,歐洲委員會于2021年4月21日發布了一項立法建議,旨在建立統一的人工智能監管規則,該規則在歐盟范圍內直接適用,一旦生效將成為全球首個專門針對人工智能監管的立法。歐洲委員會建議通過分層和基于風險的方式來規范生成式人工智能,以確保受監管的人工智能系統受到與其相關風險相稱的規則約束。根據其預期用途,人工智能系統分為四類風險,并對每一類都施加不同的監管義務。2023年12月8日,歐盟人工智能法案獲得最終批準,并將于2024年初生效。歐盟人工智能法案最終版本保留了歐洲委員會提出的分層和基于風險的方法。

1) Unacceptable-risk AI: Prohibition

不可接受風險的人工智能:禁止

The European Commission proposed to prohibit certain AI applications that pose unacceptable risks, such as behavioural distortion or manipulation, biometric categorisation, social scoring by public authorities, and biometric identification by law enforcement unless necessary for crime prevention. The European Parliament subsequently proposed to expand the scope of prohibition to include any deceptive techniques that may undermine users’ ability to make informed decisions, as well as AI applications for social scoring, emotion inference, and all biometric identification practices. The final approved version has extended the list of prohibited “unacceptable-risk AI” to encompass the amendments adopted by the European Parliament, with an exception allowing law enforcement to apply remote biometric identification under appropriate safeguards.

歐洲委員會提議禁止某些人工智能應用,這些應用存在不可接受的風險,如行為扭曲或操縱、生物特征分類、公共機構進行社會評分,以及執法機構進行生物特征識別,除非為了犯罪預防。隨后,歐洲議會提議擴大禁止范圍,包括任何可能削弱用戶做出知情決策能力的欺騙性技術,以及用于社會評分、情緒推斷和所有生物特征識別實踐的人工智能應用。確認最終通過的版本已經擴展了被禁止的“不可接受的風險”列表,涵蓋了歐洲議會所采納的修改,但執法機構在適當保障下使用遠程生物特征識別除外。

2) High-risk AI: Pre-market conformity assessment and post-market monitoring

高風險的人工智能:市場前符合評估和市場后監測

The second class of AI application, termed “high-risk AI”, is subject to detailed conformity assessment and post-market monitoring requirements instead of prohibition. Under the initial proposal, the AI applications specified under Annex III are classified as “high-risk”, such as critical infrastructure management, education and vocational training, recruitment and employee management, essential private or public services, migration control, the administration of justice, and certain law enforcement systems. Providers of high-risk AI are subject to numerous obligations, including conducting a conformity assessment to ensure compliance with the requirements specified under Title III Chapter 2 of the EU AI Act.

第二類人工智能應用被稱為“高風險”而非被禁止,需受到詳細的市場前符合評估和市場后監測要求的約束。根據最初的提案,列入附件III的人工智能應用被分類為“高風險”,如關鍵基礎設施管理、教育培訓、招聘和員工管理、關鍵的私人或公共服務、移民控制、司法管理和某些執法系統。高風險人工智能的提供者需履行諸多義務,包括進行符合評估,以確保符合歐盟人工智能法案第三章第二節規定的要求。

3) Transparency obligations for limited-risk AI

低風險的人工智能之透明度義務

Certain limited-risk AI systems capable of generating or modifying image, audio, or video content must ensure sufficient transparency by notifying users that the contents are AI-generated. Limited-risk AI may include deepfakes or chatbots. The European Parliament further proposed to oblige providers of limited-risk AI to disclose to users the functionality of the AI system, the identity of the provider, and the availability of human oversight.

對于低風險的人工智能,能夠生成或修改圖像、音頻或視頻內容,必須確保充分的透明度,即告知用戶這些內容是由人工智能生成。低風險的人工智能可能包括 deepfake 或聊天機器人。歐洲議會進一步提議,要求低風險的人工智能提供者披露人工智能系統的功能、提供者的身份以及用戶是否可獲得人類監督。

4) Voluntary code of conduct for minimal-risk AI

最低風險的人工智能之自愿行為準則

Providers of minimal-risk AI are encouraged to develop codes of conduct and voluntarily align with the conformity assessment requirements specified under Title III Chapter 2 of the EU AI Act.

鼓勵最低風險的人工智能提供者制定行為準則,并自愿遵守歐盟人工智能法案第三章第二節規定的符合評估要求。

5) Governance

治理

Similar to the European Data Protection Board established under the GDPR, the European Commission proposed a European Artificial Intelligence Board (“AI Board”) to issue recommendations on technical specifications, standards, and the implementation of the EU AI Act. This body was proposed to comprise the relevant authorities of the member states and the European Data Protection Supervisor. While the Council of the EU supported this composition, the European Parliament proposed an alternative: a fully independent AI governance body named the “AI Office”. Both proposed entities are retained in the final version of the EU AI Act, with the AI Office serving as an enforcement body and the AI Board functioning as an advisory body.

類似于GDPR項下建立的歐洲數據保護委員會,歐洲委員會提議設立一個歐洲人工智能委員會(“人工智能委員會”),以就技術規范、標準或歐盟人工智能法案的實施發布建議。這一機構擬由成員國的相關權威機構和歐洲數據保護監督員組成。雖然歐盟理事會支持這種構成,但歐洲議會提出了一種替代方案:一個完全獨立的人工智能治理機構,名為“人工智能辦公室”。已確認歐盟人工智能法案的最終版本將保留這兩個提議的實體,其中人工智能辦公室作為執法機構,而人工智能委員會則作為咨詢機構。

6) Regulating foundation models and general-purpose AI: a limitation on innovation and competitiveness?

規范基礎模型和通用型人工智能:對創新和競爭力的限制?

One significant concern with the initial proposal is its failure to account for AI systems designed to produce a generality of outputs serving various applications, whether through direct use or through incorporation into other AI systems. Such AI systems, commonly referred to as “foundation models” or “general-purpose AI”, cannot be classified into any of the risk tiers due to the absence of a specific intended use, thereby rendering them substantially unregulated under the initial proposal. To address this issue, the Council of the EU proposed a new Title IA specifically requiring general-purpose AI that may be used for high-risk purposes to comply with the conformity assessment requirements. Conversely, the European Parliament proposed a new Article 28b imposing horizontal obligations on all foundation models, including adopting a risk management system, training on appropriately governed datasets to avoid bias and discrimination, and adhering to the transparency obligations under Article 52 of the EU AI Act.

最初提案的一個重要問題是它未能考慮到為多種應用提供服務的、通過直接使用或并入其他人工智能系統的通用輸出的人工智能系統。這種人工智能系統通常被稱為“基礎模型”或“通用型人工智能”,由于缺乏特定的預期用途,無法被分類為任何風險等級,因此在最初的提案下幾乎未能受到實質性的監管。為解決該問題,歐盟理事會提出了一個新的人工智能章節,專門用于規范可能被用于高風險目的的通用型人工智能以符合符合評估要求。相反,歐洲議會提出了一項新的28b條款,對所有基礎模型施加了橫向義務,包括采用風險管理系統、在受到適當監督的數據集上進行訓練以避免偏見和歧視,并遵守歐盟人工智能法案第52條透明度義務。

Following the negotiations, EU legislators initially rejected horizontal rules and agreed on 24th October 2023 on a similar tiered approach to regulate foundation models based on their level of risk. However, on 18th November 2023, three major economies in the EU - Germany, France, and Italy - opted against binding regulation of foundation models, citing concerns over a potential deterrent effect on innovation and competition, and jointly supported self-regulation through codes of conduct. Although this controversy was resolved by the final version of the EU AI Act, which introduced horizontal transparency obligations for foundation models and stricter rules for “high-impact” foundation models, a potential disadvantage of overly stringent AI regulation has been highlighted: the additional compliance costs may undermine the competitiveness of the AI sector.

在談判過程中,歐盟立法者最初拒絕橫向規則,并于2023年10月24日同意采取類似的分層方法,根據基礎模型風險級別進行規范。然而,2023年11月18日,歐盟的三個主要經濟體德國、法國和意大利決定反對對基礎模型進行約束性的規范,理由是擔心可能對創新和競爭力產生限制效應,并聯合支持通過行為準則進行自我規范。盡管歐盟人工智能法案的最終版本通過引入針對基礎模型的橫向透明度義務和對“高影響力”基礎模型制定了更嚴格的規則解決了這一爭議,但也突顯出了過度嚴格的人工智能監管的一個潛在劣勢:額外的合規成本可能削弱人工智能行業的競爭力。

7) Final approved version of the EU AI Act and its material modifications

歐盟人工智能法案最終批準版本及實質性修改

On 8th December 2023, EU legislators approved the final compromise text of the EU AI Act, which, despite its substantial consistency with the initial proposal, adopted some material modifications, including the extension of the list of prohibited AI practices (biometric identification, emotion inference, social scoring, behavioural manipulation), horizontal transparency obligations for foundation models with stricter rules for “high-impact” foundation models/general-purpose AI, and the retention of the “AI Office” proposed by the European Parliament as a supplement to the “AI Board” proposed by the European Commission. In addition, the final version of the EU AI Act amended the initial proposal to oblige certain public entities to register their applications of high-risk AI systems with regulators. Following this provisional agreement, the EU AI Act will be finalised promptly and enter into force in early 2024.

2023年12月8日,歐盟立法者批準了歐盟人工智能法案最終妥協文本,盡管與最初提案在很大程度上保持一致,但采納了一些實質性修改,包括擴展了禁止的人工智能實踐列表(生物特征識別、情緒推斷、社會評分、行為操縱)、針對基礎模型的橫向透明度義務以及更嚴格的規則適用于“高影響力”基礎模型/通用型人工智能,并保留了歐洲議會提出的“人工智能辦公室”作為歐洲委員會提出的“人工智能委員會”之補充。此外,歐盟人工智能法案最終版本修正了最初提案,要求某些公共機構向監管機構注冊高風險的人工智能系統應用。在達成這項臨時協議后,歐盟人工智能法案將盡快完成起草工作,并于2024年初生效。

8) The ‘Brussels Effect’ and the EU AI Act’s potential influence on global AI governance

“布魯塞爾效應”及歐盟人工智能法案對全球人工智能治理的潛在影響

The ‘Brussels Effect’ generally refers to the global applicability of the EU’s regulations and standards. By leveraging its large market size, the EU often adopts high standards in various areas, including digital technologies and data privacy. When multinational corporations operate in the EU market, applying the highest standard globally is generally more practical than maintaining different standards in different regions due to the high cost of differentiation. A notable example is the GDPR, which applies to overseas entities that collect EU citizens’ data, thereby forcing major multinational corporations, especially tech giants, to adhere to the GDPR globally. This worldwide adoption of the GDPR has also encouraged other regions to enact similar legislation, such as the Personal Information Protection Law of China and the California Consumer Privacy Act. Similarly, the EU’s Common Charger Directive, which obliges all electronic devices sold in the EU to adopt USB-C chargers, has forced Apple to completely abandon its proprietary Lightning connector for the iPhone. As the EU AI Act, upon its entry into force, is set to become the strictest AI regulation globally, a similar Brussels Effect is likely to occur, forcing AI systems that operate globally, such as ChatGPT or Bard, to apply the EU AI Act universally. This global applicability could render the EU AI Act a de facto international standard for AI governance and substantially influence future Australian AI regulations.

“布魯塞爾效應“通常是指歐盟法規和標準的全球適用性。歐盟借助其龐大的市場規模,在包括數字技術和數據隱私在內的各個領域通常采用高標準。當跨國公司在歐盟市場開展業務時,在全球范圍內應用最高標準通常比在不同地區維持多種標準更為實際,因為區分化的成本較高。一個顯著的例子是GDPR,適用于收集歐盟公民數據的海外實體,因此迫使主要跨國公司尤其是科技巨頭全球遵守GDPR。GDPR的全球采納也鼓勵其他地區出臺類似立法,如中國《個人信息保護法》和美國加州消費者隱私法案。同樣,歐盟通用充電器指令要求在歐盟銷售的所有電子設備都采用USB-C充電器,迫使蘋果公司完全放棄了其自身的Lightning充電器用于iPhone。隨著歐盟人工智能法案的生效,預計將成為全球最嚴格的人工智能監管,類似的“布魯塞爾效應”可能發生,迫使全球運營的人工智能系統,如ChatGPT或Bard,在全球范圍內普遍遵守歐盟人工智能法案。這種全球適用性可能將歐盟人工智能法案視為事實上的國際人工智能治理標準,并對未來的澳大利亞人工智能法規產生重大影響。

IV Canadian Artificial Intelligence and Data Act (“AIDA”): a more suitable approach for Australian AI legislation?

加拿大人工智能和數據法案(AIDA):對澳大利亞人工智能立法更合適的方法?

In June 2022, the Canadian Government introduced the AIDA to the Canadian House of Commons. Under Sections 6 to 12 of this Bill, high-impact AI providers are subject to self-assessment obligations: adopting mandatory risk mitigation measures, keeping records, and notifying users of the intended uses, the types of content generated, and the risk mitigation measures. Providers must report any potential “material harm” caused by high-impact AI. The responsible Minister may inspect records, order a mandatory audit, or even prohibit the deployment of a specific AI system if there is a reasonable belief that the AI may produce harmful or “biased output”, infringe Sections 6 to 12, or cause imminent harm.

2022年6月,加拿大政府向加拿大下議院提出了人工智能與數據法案AIDA。根據該法案第6至12節,高影響力的人工智能提供者需承擔自我評估義務,采用強制性風險緩解措施,保存記錄,并向用戶提供關于預期用途、生成內容類型和風險緩解措施的通知。提供者必須報告高影響力人工智能的任何潛在“重大損害”。如有合理理由相信人工智能可能產生有害或“偏見輸出”、侵犯第6至12節或造成即將發生的損害,負責的部長可檢查記錄,下令進行強制審計,甚至禁止特定人工智能系統的部署。

One significant issue with the AIDA is that its obligations for high-impact AI are less comprehensive than those for high-risk AI under the EU AI Act. Additionally, compliance under the AIDA is ensured through self-assessment rather than conformity assessment by an authorized body. Nevertheless, the AIDA model could be a more suitable approach for future Australian AI legislation for several reasons. Unlike the rigid tiered approach under the EU AI Act, the AIDA grants the Canadian Government broad discretion over enforcement by defining key terms such as “biased output”, “high-impact AI”, and “material harm”, and by establishing risk mitigation measures and penalties. The Minister may order a mandatory audit and prohibit the deployment of a specific AI system based on its potential risks. Freed from burdensome parliamentary scrutiny, this regulatory flexibility could enable continuous evaluation of AI-associated risks and accelerate the decision-making process for developing suitable standards for diverse AI applications in a timely manner. Furthermore, unlike the EU AI Act, which limits penalties to administrative fines, the AIDA imposes criminal liability for severe infringements, potentially ensuring a higher level of compliance through stronger deterrence. This approach could offer standards comparable to the conformity assessment under the EU AI Act, but with potentially lower compliance costs due to its reliance on self-assessment.

AIDA的一個重要問題在于,其對高影響力人工智能的義務比歐盟人工智能法案對高風險人工智能的要求不夠全面。此外,AIDA項下合規性是通過自我評估而非由授權機構進行符合評估來確保的。然而,出于幾個原因,AIDA模式可能是未來澳大利亞人工智能立法的更合適方法。與歐盟人工智能法案項下嚴格分層方法不同,AIDA通過定義關鍵術語如“偏見輸出”、“高影響力人工智能”和“重大損害”,并設立風險緩解措施和處罰,賦予了加拿大政府廣泛的執行裁量權。部長可基于人工智能的潛在風險下令進行強制審計并禁止特定人工智能系統的部署。在無繁瑣的議會審查情況下,這種監管靈活性可持續評估人工智能相關風險,并加快制定適用于多種人工智能應用的合適標準的決策過程。此外,與限制處罰為行政罰款的歐盟人工智能法案不同,AIDA將刑事責任作為嚴重違規的處罰,可能通過更強有力的威懾確保更高水平的合規性。這種方法可能提供與歐盟人工智能法案項下符合評估相媲美的標準水平,但由于依賴自我評估,合規成本可能會降低。

V Bibliography

參考文獻

1) Legislation/立法

CA Civ Code § 1798.100 (2018).

Directive (EU) 2022/2380 of the European Parliament and of the Council of 23 November 2022 amending Directive 2014/53/EU on the harmonization of the laws of the Member States relating to the making available on the market of radio equipment.

Online Safety Act 2023 (UK).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC.

《中華人民共和國個人信息保護法》[Personal Information Protection Law of the People’s Republic of China] (People’s Republic of China), National People’s Congress, Order No.91/2021, 20th August 2021.

2) Other Legislative Materials/其他立法文件

Artificial Intelligence (Regulation) HL Bill (2023-24) 11 (UK).

Bill C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2022, pt 3 (Canada).

European Union, European Commission, ‘Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final, 21 April 2021.

European Union, Council of the European Union, ‘General approach adopted by the Council of the European Union on 25 November 2022 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 25 November 2022.

European Union, European Parliament, ‘Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 14 June 2023.

3) Draft treaty/草案條約

Council of Europe, Committee on Artificial Intelligence, ‘Consolidated working draft of the framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law’ CAI (2023)18, 7 July 2023.

4) Secondary Resources/其他資源

Anu Bradford, ‘The Brussels Effect’ (2012) 107(1) Northwestern University Law Review.

AI Safety Summit, ‘The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023’ (1 November 2023).

BBC, ‘ChatGPT banned in Italy over privacy concerns’ (Web Page, 1 April 2023).

Council of the European Union, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’ (Web Page, 9 December 2023).

Charlotte Siegmann and Markus Anderljung, ‘The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market’ (Cambridge University Press, 2022).

Department for Science, Innovation and Technology (UK), ‘A pro-innovation approach to AI regulation’ (2023).

Department for Science, Innovation and Technology (UK), ‘Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk’ (2023).

Lilian Edwards, ‘Expert explainer: The EU AI Act proposal’, Ada Lovelace Institute (Web Page, 8 April 2022).

Organisation for Economic Co-operation and Development, ‘What are the OECD Principles on AI?’ (2020).

Science, Innovation and Technology Committee, Parliament of the United Kingdom, ‘The governance of artificial intelligence: interim report’ (Ninth Report of Session 2022-23, 31 August 2023).

The White House, ‘Blueprint for an AI Bill of Rights: making automated systems work for the American people’ (2022).

The White House, ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ (Web Page, 30 October 2023).

The Group of Seven, ‘Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI System’ (30 October 2023).

The Group of Seven, ‘Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI System’ (30 October 2023).

Tech Policy Press, ‘Will Disagreement Over Foundation Models Put the EU AI Act at Risk?’ (Web Page, 30 November 2023).

Reuters, ‘Exclusive: Germany, France and Italy reach agreement on future AI regulation’ (Web Page, 21 November 2023).




上海市律師協會版權所有 ©2017-2024

