<?xml version="1.0" ?>
<ns0:MeasureDoc xmlns:html="http://www.w3.org/1999/xhtml" xmlns:ns0="http://lc.ca.gov/legalservices/schemas/caml.1#" xmlns:ns3="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://lc.ca.gov/legalservices/schemas/caml.1# xca.1.xsd">
<ns0:Description>
<ns0:Id>20250SB__081397AMD</ns0:Id>
<ns0:VersionNum>97</ns0:VersionNum>
<ns0:History>
<ns0:Action>
<ns0:ActionText>INTRODUCED</ns0:ActionText>
<ns0:ActionDate>2025-02-21</ns0:ActionDate>
</ns0:Action>
<ns0:Action>
<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
<ns0:ActionDate>2025-03-26</ns0:ActionDate>
</ns0:Action>
<ns0:Action>
<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
<ns0:ActionDate>2025-05-01</ns0:ActionDate>
</ns0:Action>
</ns0:History>
<ns0:LegislativeInfo>
<ns0:SessionYear>2025</ns0:SessionYear>
<ns0:SessionNum>0</ns0:SessionNum>
<ns0:MeasureType>SB</ns0:MeasureType>
<ns0:MeasureNum>813</ns0:MeasureNum>
<ns0:MeasureState>AMD</ns0:MeasureState>
</ns0:LegislativeInfo>
<ns0:AuthorText authorType="LEAD_AUTHOR">Introduced by Senator McNerney</ns0:AuthorText>
<ns0:Authors>
<ns0:Legislator>
<ns0:Contribution>LEAD_AUTHOR</ns0:Contribution>
<ns0:House>SENATE</ns0:House>
<ns0:Name>McNerney</ns0:Name>
</ns0:Legislator>
</ns0:Authors>
<ns0:Title>An act to add Chapter 14 (commencing with Section 8898) to Division 1 of Title 2 of the Government Code, relating to artificial intelligence.</ns0:Title>
<ns0:RelatingClause>artificial intelligence</ns0:RelatingClause>
<ns0:GeneralSubject>
<ns0:Subject>Multistakeholder regulatory organizations.</ns0:Subject>
</ns0:GeneralSubject>
<ns0:DigestText>
<html:p>Existing law requires, on or before September 1, 2024, the Department of Technology to conduct, in coordination with other interagency bodies as it deems appropriate, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency. The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence (GenAI) system to offer the user the option to include a manifest disclosure in image, video, or audio content, or content that is any combination thereof, created or altered by the covered provider’s GenAI system that, among other things, identifies content as AI-generated content.</html:p>
<html:p>This bill would establish a process by which the Attorney General designates, for a
renewable period of 3 years, a private entity as a multistakeholder regulatory organization (MRO) if that entity meets certain requirements, including that the entity presents a plan that ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications. The bill would require an applicant for designation by the Attorney General as an MRO to submit with its application a plan that contains certain elements, including the applicant’s approach to mitigating specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration. The bill would require the Attorney General to adopt regulations, with input from stakeholders, that establish minimum requirements for those plans and conflict of interest rules for MROs, as specified.</html:p>
<html:p>This bill would require an MRO to perform various responsibilities related to certifying the safety of artificial intelligence models and artificial intelligence applications, including decertifying an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO and submitting an annual report to the Legislature and the Attorney General that addresses, among other things, the adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.</html:p>
<html:p>This bill would authorize the Attorney General to establish a fee structure for charging fees to applicants and designated MROs to offset the reasonable costs incurred by the Attorney General in carrying out its duties pursuant to the bill and adopt regulations necessary to administer the bill.</html:p>
<html:p>This bill would, in a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application against a developer of the artificial intelligence model or artificial intelligence application, create a rebuttable presumption that the developer exercised reasonable care if the artificial intelligence model or artificial intelligence application in question was
certified by an MRO at the time of the plaintiff’s injuries, as specified.</html:p>
</ns0:DigestText>
<ns0:DigestKey>
<ns0:VoteRequired>MAJORITY</ns0:VoteRequired>
<ns0:Appropriation>NO</ns0:Appropriation>
<ns0:FiscalCommittee>YES</ns0:FiscalCommittee>
<ns0:LocalProgram>NO</ns0:LocalProgram>
</ns0:DigestKey>
<ns0:MeasureIndicators>
<ns0:ImmediateEffect>NO</ns0:ImmediateEffect>
<ns0:ImmediateEffectFlags>
<ns0:Urgency>NO</ns0:Urgency>
<ns0:TaxLevy>NO</ns0:TaxLevy>
<ns0:Election>NO</ns0:Election>
<ns0:UsualCurrentExpenses>NO</ns0:UsualCurrentExpenses>
<ns0:BudgetBill>NO</ns0:BudgetBill>
<ns0:Prop25TrailerBill>NO</ns0:Prop25TrailerBill>
</ns0:ImmediateEffectFlags>
</ns0:MeasureIndicators>
</ns0:Description>
<ns0:Bill id="bill">
<ns0:Preamble>The people of the State of California do enact as follows:</ns0:Preamble>
<ns0:BillSection id="id_00C47628-6962-438E-B8DC-5546F74F608A">
<ns0:Num>SECTION 1.</ns0:Num>
<ns0:Content>
<html:p>The Legislature finds and declares all of the following:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
A multistakeholder regulatory organization (MRO) tasked with defining standards based on best practices and certifying adherence to them is an agile, public-private model designed to promote innovation, ensure the security of artificial intelligence (AI) platforms, reduce regulatory uncertainty, and build societal trust.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
By proactively setting clear standards, creating tailored pathways for both established companies and emerging developers, and offering legal and economic incentives, the MRO transforms compliance
into a competitive advantage. It is not just about managing risks; it is about accelerating responsible growth and empowering businesses to confidently innovate and thrive in an AI-driven economy. Compliance with established standards confers a strong market advantage.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
Leveraging private sector and government cooperation to achieve what would otherwise require regulations is a proven approach that utilizes all available expertise while enhancing transparency among industry players, policymakers, and the public.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
Rather than relying on government agencies, semiprivate standards organizations with sector-specific expertise can better accommodate diverse market participants, varied technology use cases, and aligned public-private interests. This approach creates an adaptable
and predictable compliance mechanism that ensures AI governance standards can evolve quickly alongside technological advancements.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
Legal safeguards are effective means to incentivize responsible AI development and prevent unnecessary harm. Reducing litigation risk encourages investment and fosters innovation. It also promotes heightened care and adherence to best practices while maintaining accountability and effectively balancing risk mitigation with consumer and public protection.
</html:p>
<html:p>
(f)
<html:span class="EnSpace"/>
Public opinion research shows that while the public wants government to help establish guardrails for AI, a majority believe the government alone is incapable of effectively establishing these guardrails. An MRO surpasses traditional regulation by incentivizing a race to the top for
transparency and safety that prioritizes innovation and adaptability and serves as a central, informed voice to governments and society for responsive, forward-looking governance.
</html:p>
</ns0:Content>
</ns0:BillSection>
<ns0:BillSection id="id_07A3220D-125C-495B-84DD-2958EAE76AA4">
<ns0:Num>SEC. 2.</ns0:Num>
<ns0:ActionLine action="IS_ADDED" ns3:href="urn:caml:codes:GOV:caml#xpointer(%2Fcaml%3ALawDoc%2Fcaml%3ACode%2Fcaml%3ALawHeading%5B%40type%3D'TITLE'%20and%20caml%3ANum%3D'2.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'DIVISION'%20and%20caml%3ANum%3D'1.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'CHAPTER'%20and%20caml%3ANum%3D'14.'%5D)" ns3:label="fractionType: LAW_SPREAD||commencingWith: 8898" ns3:type="locator">
Chapter 14 (commencing with Section 8898) is added to Division 1 of Title 2 of the
<ns0:DocName>Government Code</ns0:DocName>
, to read:
</ns0:ActionLine>
<ns0:Fragment>
<ns0:LawHeading id="id_60650189-14A7-4741-B257-98DE799BD1C7" type="CHAPTER">
<ns0:Num>14.</ns0:Num>
<ns0:LawHeadingVersion id="id_AB30FC73-189E-43F8-A466-D8EAB2BF9B0E">
<ns0:LawHeadingText>Multistakeholder Regulatory Organizations</ns0:LawHeadingText>
</ns0:LawHeadingVersion>
<ns0:LawSection id="id_DC196E0A-D2E8-401D-BDD0-22D98D5502AE">
<ns0:Num>8898.</ns0:Num>
<ns0:LawSectionVersion id="id_ADFB974C-34BF-4E50-B41A-DC8071C87E57">
<ns0:Content>
<html:p>As used in this chapter:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
“Artificial intelligence application” means a software program or system that uses artificial intelligence models to perform tasks that typically require human intelligence.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
“Artificial intelligence model” means an engineered or machine-based system that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
“Developer” means a person who develops an artificial intelligence model or artificial intelligence application that is deployed
in the state.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
“Multistakeholder regulatory organization (MRO)” means an entity designated as an MRO by the Attorney General pursuant to this chapter that performs the functions specified in Section 8898.3, including certification of developers’ exercise of heightened care and compliance with standards based on best practices for the prevention of personal injury and property damage with respect to an artificial intelligence model or application.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
“Plan” means a plan submitted pursuant to Section 8898.2.
</html:p>
<html:p>
(f)
<html:span class="EnSpace"/>
“Security vendor” means a third-party entity engaged by an MRO or developer to evaluate the safety and security of an artificial intelligence model or application through processes that include red
teaming, risk detection, and risk mitigation.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_11D8185C-79FE-4539-9E34-801B95D2E9CE">
<ns0:Num>8898.1.</ns0:Num>
<ns0:LawSectionVersion id="id_1AFF323E-978B-4B0A-B9F1-E58EBA9623A8">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
The Attorney General shall designate one or more MROs pursuant to this chapter.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
In complying with subdivision (a), the Attorney General shall determine whether an applicant MRO’s plan ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications by considering all of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The applicant’s personnel and the qualifications of those personnel.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The quality of the applicant’s plan with respect to ensuring that artificial intelligence model and
application developers exercise heightened care and comply with best practice-based standards for the prevention of personal injury and property damage, considering factors including, but not limited to, both of the following:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
The viability and rigor of the applicant’s evaluation methods, technologies, and administrative procedures.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
The adequacy of the applicant’s plan to develop measurable standards for evaluating artificial intelligence developers’ mitigation of risks.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
The applicant’s independence from the artificial intelligence industry.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Whether the applicant serves a particular existing or potential artificial intelligence industry
segment.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
A designation as an MRO under this section shall expire after three years, and the MRO may apply for a new designation.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
The Attorney General may revoke a designation if any of the following is true:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The MRO’s plan is materially misleading or inaccurate.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The MRO systematically fails to adhere to its plan.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
A material change compromises the MRO’s independence from the artificial intelligence industry.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Evolution of technology renders the MRO’s methods obsolete for ensuring acceptable levels of risk of personal injury and property damage.
</html:p>
<html:p>
(5)
<html:span class="EnSpace"/>
An artificial intelligence model or artificial intelligence application certified by the MRO causes a significant harm.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_51007D79-21DF-4FE5-A22B-1E324E927A51">
<ns0:Num>8898.2.</ns0:Num>
<ns0:LawSectionVersion id="id_8BDD6942-E77B-41A2-8F40-E9DAEC89717E">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
An applicant to the Attorney General for designation as an MRO shall submit with its application a plan that contains all of the following elements:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The applicant’s approach to auditing of artificial intelligence models and artificial intelligence applications to verify that an artificial intelligence developer has exercised heightened care and adhered to predeployment and postdeployment best practices and procedures to prevent personal injury or property damage caused by the artificial intelligence model or artificial intelligence application.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The applicant’s approach to mitigating
specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
An approach to ensuring disclosure by developers to the MRO of risks detected, incident reports, and risk mitigation efforts.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
An approach to specifying the scope and duration of certification of an artificial intelligence model or artificial intelligence application, including technical thresholds for updates requiring renewed certification.
</html:p>
<html:p>
(5)
<html:span class="EnSpace"/>
An approach to data collection for public reporting from audited developers and vendors that addresses all of the following:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
Aggregating and tracking evaluation data from certified labs.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
Categories of metadata to be aggregated and tracked.
</html:p>
<html:p>
(C)
<html:span class="EnSpace"/>
Measures to protect trade secrets and mitigate antitrust risk from information sharing.
</html:p>
<html:p>
(6)
<html:span class="EnSpace"/>
The applicant’s intended use, if any, of security vendors to evaluate artificial intelligence developers, models, or applications, including a method of certifying and training vendors to accurately evaluate an artificial intelligence model or developer exercising heightened care and complying with best practices.
</html:p>
<html:p>
(7)
<html:span class="EnSpace"/>
Implementation and enforcement of whistleblower protections among certified developers.
</html:p>
<html:p>
(8)
<html:span class="EnSpace"/>
Remediation of postcertification noncompliance.
</html:p>
<html:p>
(9)
<html:span class="EnSpace"/>
An approach to reporting of societal risks and benefits identified through auditing.
</html:p>
<html:p>
(10)
<html:span class="EnSpace"/>
An approach to interfacing effectively with federal and non-California state authorities.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
The plan submitted pursuant to this section may be tailored to a particular artificial intelligence market segment.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
An applicant shall annually audit all of the following to ensure independence from the artificial intelligence industry and report the findings of its audit to the Attorney General:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The applicant’s board composition.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The availability of resources to implement the applicant’s plan.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
The applicant’s funding sources.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Representation of civil society representatives in evaluation and reporting functions.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
The Attorney General shall not modify a plan submitted pursuant to this section.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_1A594BD8-5CF4-485C-8398-B2E89AB2E8DC">
<ns0:Num>8898.2.5.</ns0:Num>
<ns0:LawSectionVersion id="id_98913BB8-7AE8-4C4F-8CB3-5D44A423B0A3">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
The Attorney General shall adopt regulations, with input from stakeholders, that establish both of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
Minimum requirements for plans required to be submitted pursuant to Section 8898.2.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Conflict of interest rules for MROs that include, but are not limited to, reporting requirements on boards of directors and donors funding the MRO to ensure adequate independence from the artificial intelligence industry and transparency on revenues streaming from certification services.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
The Attorney General may establish a fee structure for charging fees to applicants and designated MROs to offset
the reasonable costs incurred by the Attorney General in carrying out its duties pursuant to this chapter.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
The Attorney General may adopt regulations necessary to administer this chapter.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_8BB995E3-5364-40E6-A3AA-63F50214E6FE">
<ns0:Num>8898.3.</ns0:Num>
<ns0:LawSectionVersion id="id_8CFF1DE1-A3C8-40F9-A450-AE1D108C5C66">
<ns0:Content>
<html:p>An MRO designated pursuant to this chapter shall do all of the following:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
Ensure developers’ and security vendors’ exercise of heightened care and compliance with best practices for the prevention of personal injury and property damage and certify qualified artificial intelligence models or artificial intelligence applications that meet the requirements prescribed by the
MRO.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
Implement the plan submitted pursuant to Section 8898.2.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
Decertify an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
Submit to the Legislature, pursuant to Section 9795, and to the Attorney General an annual report that addresses all of the
following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
Aggregated information on capabilities of artificial intelligence models, the observed societal risks and benefits associated with those capabilities, and the potential societal risks and benefits associated with those capabilities.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
Developer and security vendor certifications.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Aggregated results of certification assessments.
</html:p>
<html:p>
(5)
<html:span class="EnSpace"/>
Remedial measures prescribed by the MRO and whether the developer or security vendor complied with those measures.
</html:p>
<html:p>
(6)
<html:span class="EnSpace"/>
Identified additional risks outside personal injury or property damage and the adequacy of existing mitigation measures to address those risks.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
Retain for 10 years a document that is related to the MRO’s activities under this chapter.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_EA866BFC-605B-4C45-BD84-A1B17BD166A1">
<ns0:Num>8898.4.</ns0:Num>
<ns0:LawSectionVersion id="id_6C092443-CEE4-45DE-B190-0AF4B731F11C">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
In a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application against a developer of the artificial intelligence model or artificial intelligence application, there shall be a rebuttable presumption that the developer exercised reasonable care if the
artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff’s injuries.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
The rebuttable presumption provided for in this section may be overcome by the introduction of admissible evidence the court finds contrary to the presumption.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
</ns0:LawHeading>
</ns0:Fragment>
</ns0:BillSection>
</ns0:Bill>
</ns0:MeasureDoc>