<?xml version="1.0" ?>
<ns0:MeasureDoc xmlns:html="http://www.w3.org/1999/xhtml" xmlns:ns0="http://lc.ca.gov/legalservices/schemas/caml.1#" xmlns:ns3="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://lc.ca.gov/legalservices/schemas/caml.1# xca.1.xsd">
<ns0:Description>
<ns0:Id>20250SB__042097AMD</ns0:Id>
<ns0:VersionNum>97</ns0:VersionNum>
<ns0:History>
<ns0:Action>
<ns0:ActionText>INTRODUCED</ns0:ActionText>
<ns0:ActionDate>2025-02-18</ns0:ActionDate>
</ns0:Action>
<ns0:Action>
<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
<ns0:ActionDate>2025-03-26</ns0:ActionDate>
</ns0:Action>
<ns0:Action>
<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
<ns0:ActionDate>2025-05-23</ns0:ActionDate>
</ns0:Action>
</ns0:History>
<ns0:LegislativeInfo>
<ns0:SessionYear>2025</ns0:SessionYear>
<ns0:SessionNum>0</ns0:SessionNum>
<ns0:MeasureType>SB</ns0:MeasureType>
<ns0:MeasureNum>420</ns0:MeasureNum>
<ns0:MeasureState>AMD</ns0:MeasureState>
</ns0:LegislativeInfo>
<ns0:AuthorText authorType="LEAD_AUTHOR">Introduced by Senator Padilla</ns0:AuthorText>
<ns0:Authors>
<ns0:Legislator>
<ns0:Contribution>LEAD_AUTHOR</ns0:Contribution>
<ns0:House>SENATE</ns0:House>
<ns0:Name>Padilla</ns0:Name>
</ns0:Legislator>
</ns0:Authors>
<ns0:Title> An act to add Chapter 24.6 (commencing with Section 22756) to Division 8 of the Business and Professions Code, and to add Article 11 (commencing with Section 10285.8) to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, relating to artificial intelligence.</ns0:Title>
<ns0:RelatingClause>artificial intelligence</ns0:RelatingClause>
<ns0:GeneralSubject>
<ns0:Subject>Automated decision systems.</ns0:Subject>
</ns0:GeneralSubject>
<ns0:DigestText>
<html:p>The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence system to make available an AI detection tool at no cost to the user that meets certain criteria, including that the tool outputs any system provenance data, as defined, that is detected in the content. The California Consumer Privacy Act of 2018 grants a consumer various rights with respect to personal information that is collected or sold by a business, as defined, including the right to direct a business that sells or shares personal information about the consumer to third parties not to sell or share the consumer’s personal information, as specified.</html:p>
<html:p>This bill would generally regulate a developer or a deployer of a high-risk automated decision system, as defined, including by requiring a developer or a deployer to perform an
impact assessment on the high-risk automated decision system before making it publicly available or deploying it, as prescribed. The bill would require a state agency to require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment and would require the state agency to keep that impact assessment confidential. The bill would also require a developer to provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment and would require the impact assessment to be kept confidential.</html:p>
<html:p>This bill would authorize the Attorney General or the Civil Rights Department to bring a specified civil action to enforce compliance with the bill, as prescribed, and would authorize the Attorney General or
the Civil Rights Department to allow a developer or deployer to cure, within 45 days of receiving a certain notice of a violation, the noticed violation, as prescribed.</html:p>
<html:p>This bill would prohibit a state agency from awarding a contract for a high-risk automated decision system unless the person to whom the contract is awarded has certified
that the high-risk automated decision system does not violate, among other civil rights laws, the bill. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program.</html:p>
<html:p>Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.</html:p>
<html:p>This bill would make legislative findings to that effect.</html:p>
<html:p>The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory
provisions establish procedures for making that reimbursement.</html:p>
<html:p>This bill would provide that no reimbursement is required by this act for a specified reason.</html:p>
</ns0:DigestText>
<ns0:DigestKey>
<ns0:VoteRequired>MAJORITY</ns0:VoteRequired>
<ns0:Appropriation>NO</ns0:Appropriation>
<ns0:FiscalCommittee>YES</ns0:FiscalCommittee>
<ns0:LocalProgram>YES</ns0:LocalProgram>
</ns0:DigestKey>
<ns0:MeasureIndicators>
<ns0:ImmediateEffect>NO</ns0:ImmediateEffect>
<ns0:ImmediateEffectFlags>
<ns0:Urgency>NO</ns0:Urgency>
<ns0:TaxLevy>NO</ns0:TaxLevy>
<ns0:Election>NO</ns0:Election>
<ns0:UsualCurrentExpenses>NO</ns0:UsualCurrentExpenses>
<ns0:BudgetBill>NO</ns0:BudgetBill>
<ns0:Prop25TrailerBill>NO</ns0:Prop25TrailerBill>
</ns0:ImmediateEffectFlags>
</ns0:MeasureIndicators>
</ns0:Description>
<ns0:Bill id="bill">
<ns0:Preamble>The people of the State of California do enact as follows:</ns0:Preamble>
<ns0:BillSection id="id_D2FE9583-9406-4F91-BA54-EC06BED70001">
<ns0:Num>SECTION 1.</ns0:Num>
<ns0:Content>
<html:p>The Legislature finds and declares all of the following:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Artificial intelligence technologies are becoming an integral part of daily life in California and have profound implications for privacy, equity, fairness, and public safety.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
It is critical to protect individuals’ rights to safeguard against potential harms, including discrimination, privacy violations, and unchecked automation in critical decisionmaking processes.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
A comprehensive set of rights must be established to ensure artificial
intelligence technologies align with the public interest and reflect the values of California residents.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Individuals should have the right to receive a clear and accessible explanation about how artificial intelligence systems operate, including the data they use and the decisions they make.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
An entity that uses artificial intelligence systems to make decisions impacting California residents should provide a mechanism to inform individuals of the system’s logic, processing methods, and intended outcomes in a manner that is understandable.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
All individuals have the right to control their personal data in relation to artificial intelligence systems. Artificial intelligence
systems should operate with the highest standards of data privacy and security, in line with the California Consumer Privacy Act of 2018 and other relevant privacy laws.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Before personal data is used in artificial intelligence systems, entities should obtain informed, explicit consent from individuals, and individuals should have the right to withdraw consent at any time without penalty.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
Entities should ensure that personal data used by artificial intelligence systems is anonymized or pseudonymized if feasible, and data retention should be limited to the purposes for which the data was initially collected.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Artificial intelligence systems should not discriminate against individuals based
on race, gender, sexual orientation, disability, religion, socioeconomic status, or other protected characteristics under California law.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Entities deploying artificial intelligence technologies should perform regular audits to identify and address any biases or inequities in their artificial intelligence systems and should ensure that artificial intelligence systems are designed and trained to promote fairness and equal treatment.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Individuals should have the right to hold entities accountable for any harm caused by artificial intelligence systems, and entities should be liable for the actions and decisions made by artificial intelligence technologies they deploy.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
An individual or
group adversely affected by artificial intelligence-driven decisions should have access to a straightforward and transparent process for seeking redress, including the ability to challenge those decisions through human review and appeal mechanisms.
</html:p>
<html:p>
(f)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Individuals should have the right to request human oversight for significant decisions made by artificial intelligence systems that impact them, particularly in areas such as employment, health care, housing, education, and criminal justice.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Artificial intelligence systems in high-stakes decisionmaking contexts should involve human review or intervention before final decisions, ensuring that automated decisions align with human values and public policy goals.
</html:p>
</ns0:Content>
</ns0:BillSection>
<ns0:BillSection id="id_2385B6F7-E5BF-49A9-A458-050F41AAA865">
<ns0:Num>SEC. 2.</ns0:Num>
<ns0:ActionLine action="IS_ADDED" ns3:href="urn:caml:codes:BPC:caml#xpointer(%2Fcaml%3ALawDoc%2Fcaml%3ACode%2Fcaml%3ALawHeading%5B%40type%3D'DIVISION'%20and%20caml%3ANum%3D'8.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'CHAPTER'%20and%20caml%3ANum%3D'24.6.'%5D)" ns3:label="fractionType: LAW_SPREAD||commencingWith: 22756" ns3:type="locator">
Chapter 24.6 (commencing with Section 22756) is added to Division 8 of the
<ns0:DocName>Business and Professions Code</ns0:DocName>
, to read:
</ns0:ActionLine>
<ns0:Fragment>
<ns0:LawHeading id="id_1678F821-CD6D-4FB3-A504-011768848F01" type="CHAPTER">
<ns0:Num>24.6.</ns0:Num>
<ns0:LawHeadingVersion id="id_52CF4E83-5691-402B-B03C-89259D1E0BE9">
<ns0:LawHeadingText>Automated Decision Systems</ns0:LawHeadingText>
</ns0:LawHeadingVersion>
<ns0:LawSection id="id_3173CBF4-7EE1-4B80-9429-94E6C6BDBAAA">
<ns0:Num>22756.</ns0:Num>
<ns0:LawSectionVersion id="id_8A772B7B-B5DC-44AB-902E-C4AFFA8CC017">
<ns0:Content>
<html:p>As used in this chapter:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
“Algorithmic discrimination” means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
“Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
“Automated decision system” means a computational process derived
from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
“Automated decision system” does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
“Deployer” means a natural person or entity that uses a high-risk automated decision system in the state.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
“Detecting decisionmaking patterns without influencing outcomes” means the act of artificial intelligence analyzing patterns for informational
purposes without direct influence on decisions.
</html:p>
<html:p>
(f)
<html:span class="EnSpace"/>
“Developer” means a natural person or entity that designs, codes, produces, or makes a substantial modification to a high-risk automated decision system for use in the state.
</html:p>
<html:p>
(g)
<html:span class="EnSpace"/>
“Education enrollment or opportunity” means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.
</html:p>
<html:p>
(h)
<html:span class="EnSpace"/>
“Employment or employment opportunity” means hiring, salary, wage, or other material
term, condition, or privilege of an employee’s employment.
</html:p>
<html:p>
(i)
<html:span class="EnSpace"/>
“Health care” means health care services or insurance for health, mental health, dental, or vision.
</html:p>
<html:p>
(j)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
“High-risk automated decision system” means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
Education enrollment or opportunity.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
Employment or employment opportunity.
</html:p>
<html:p>
(C)
<html:span class="EnSpace"/>
Essential utilities.
</html:p>
<html:p>
(D)
<html:span class="EnSpace"/>
Temporary, short-term, or long-term housing.
</html:p>
<html:p>
(E)
<html:span class="EnSpace"/>
Health care services.
</html:p>
<html:p>
(F)
<html:span class="EnSpace"/>
Lending services.
</html:p>
<html:p>
(G)
<html:span class="EnSpace"/>
A legal right or service.
</html:p>
<html:p>
(H)
<html:span class="EnSpace"/>
An essential government service.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
“High-risk automated decision system” does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.
</html:p>
<html:p>
(k)
<html:span class="EnSpace"/>
“Improving results of previously completed human activities”
means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.
</html:p>
<html:p>
(l)
<html:span class="EnSpace"/>
“Narrow procedural task” means a limited, procedural task that has a minimal impact on outcomes.
</html:p>
<html:p>
(m)
<html:span class="EnSpace"/>
“Preparatory task for assessment” means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority.
</html:p>
<html:p>
(n)
<html:span class="EnSpace"/>
“Protected classification” means a classification protected under existing law prohibiting discrimination, including, but not limited to, the California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of
Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).
</html:p>
<html:p>
(o)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
“State agency” means any of the following:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
A state office, department, division, or bureau.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
The California State University.
</html:p>
<html:p>
(C)
<html:span class="EnSpace"/>
The Board of Parole Hearings.
</html:p>
<html:p>
(D)
<html:span class="EnSpace"/>
A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
“State agency” does not include the University of California, the Legislature, the judicial
branch, or a board that is not described in paragraph (1).
</html:p>
<html:p>
(p)
<html:span class="EnSpace"/>
“Substantial modification” means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_E2BF1D4B-CB28-4FB8-8C25-536974376028">
<ns0:Num>22756.1.</ns0:Num>
<ns0:LawSectionVersion id="id_D6DD5B53-3E3E-4AC2-AACA-8A134D1EC5D3">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment
on or before January 1, 2028.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
The state agency does not make a substantial modification to the high-risk automated decision system.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).
</html:p>
<html:p>
(C)
<html:span class="EnSpace"/>
The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as
intended by the developer is likely to result in algorithmic discrimination.
</html:p>
<html:p>
(D)
<html:span class="EnSpace"/>
The state agency is in compliance with Section 22756.4.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
A developer shall make available to deployers and potential deployers the statements included in the developer’s impact assessment pursuant to
Section 22756.2.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_95E0578E-EC20-49E3-A173-313394F5A6AE">
<ns0:Num>22756.2.</ns0:Num>
<ns0:LawSectionVersion id="id_CC41330D-7971-4F02-9447-89D48ABBF6C3">
<ns0:Content>
<html:p>An impact assessment prepared pursuant to this chapter shall include all of the following:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
A description of the high-risk automated decision system’s intended outputs.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.
</html:p>
<html:p>
(d)
<html:span class="EnSpace"/>
A summary
of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.
</html:p>
<html:p>
(e)
<html:span class="EnSpace"/>
A developer’s impact assessment shall also include both of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
A description of safeguards implemented or other measures taken to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
</html:p>
<html:p>
(f)
<html:span class="EnSpace"/>
A statement of the extent to which the deployer’s use of the high-risk automated decision system is consistent
with, or varies from, the developer’s statement of the high-risk automated decision system’s purpose and intended benefits, intended uses, and intended deployment contexts.
</html:p>
<html:p>
(g)
<html:span class="EnSpace"/>
A description of safeguards implemented or other measures taken to mitigate and guard against any known risks of discrimination arising from the high-risk automated decision system.
</html:p>
<html:p>
(h)
<html:span class="EnSpace"/>
A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_0CD89593-FC11-4EF3-A0B9-064B7FE09688">
<ns0:Num>22756.3.</ns0:Num>
<ns0:LawSectionVersion id="id_C5B23E0C-156C-4C51-8CB5-30C5002F09D2">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The purpose of the high-risk automated decision system and the specific decision it was used to make.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
How the high-risk automated decision system was used to make the decision.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
The type of data used by the high-risk automated decision system.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Contact information for the deployer.
</html:p>
<html:p>
(5)
<html:span class="EnSpace"/>
A link to the statement required by subdivision (b).
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
A deployer shall make available on its internet website a statement summarizing all of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The types of high-risk automated decision systems it currently deploys.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_698D2BB0-8CAD-4A34-A50E-9702E0A351F5">
<ns0:Num>22756.4.</ns0:Num>
<ns0:LawSectionVersion id="id_2A969697-391D-4708-ADC3-2871877BC3D9">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
A developer or a deployer shall establish, document, implement, and maintain a governance program that
does all of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
Contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Aligns with existing standards and frameworks, including the National Institute of Standards and Technology AI Risk Management Framework.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
Specifies and incorporates the principles, processes, and personnel used to identify, document, and mitigate foreseeable risks of algorithmic discrimination.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
Is regularly reviewed and updated.
</html:p>
<html:p>
(5)
<html:span class="EnSpace"/>
Includes a structural framework for documenting, investigating, and resolving incidents.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
The governance program required by this section shall be appropriately designed with respect to all of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The use, or intended use, of the high-risk automated decision system.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The size, complexity, and resources of the deployer or developer.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
The nature, context, and scope of the activities of the deployer
or developer in connection with the high-risk automated decision system.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_3EC03E41-202B-456B-AB82-BDDB219D921F">
<ns0:Num>22756.5.</ns0:Num>
<ns0:LawSectionVersion id="id_2A8AC49F-994C-4A6D-A67C-038A0462747B">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
If a disclosure is not made because the disclosure would reveal a trade secret,
the developer or deployer shall notify the party to whom the disclosure would have otherwise been made of the basis for which the disclosure was not made.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_42787BF1-E10E-4F17-A593-770614358E3D">
<ns0:Num>22756.6.</ns0:Num>
<ns0:LawSectionVersion id="id_5A913CCC-F81E-4A11-A6C9-222331E7BB18">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed
pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_D794165B-5212-4266-9C13-6C74F4F5571F">
<ns0:Num>22756.7.</ns0:Num>
<ns0:LawSectionVersion id="id_D4417225-C296-4B67-B10C-CDFFA9C1DCD3">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
(A)
<html:span class="EnSpace"/>
If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
Injunctive relief.
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
Reasonable attorney’s fees and costs.
</html:p>
<html:p>
(4)
<html:span class="EnSpace"/>
If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.
</html:p>
<html:p>
(c)
<html:span class="EnSpace"/>
(1)
<html:span class="EnSpace"/>
Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days’ written notice to a deployer or developer of any alleged violation of this chapter.
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The Attorney General or the Civil Rights Department may, at its discretion, provide a developer or a deployer with a time period to cure the alleged violation after considering all of the following:
</html:p>
<html:p>
(A)
<html:span class="EnSpace"/>
A lack of intent to commit the violation.
</html:p>
<html:p>
(B)
<html:span class="EnSpace"/>
Voluntary efforts undertaken to cure the alleged violation before being notified of the violation.
</html:p>
<html:p>
(C)
<html:span class="EnSpace"/>
The size and economic resources of the noncompliant developer or deployer.
</html:p>
<html:p>
(D)
<html:span class="EnSpace"/>
The size and scope of the impact of the decisions made by an automated decision system related to the violation.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
<ns0:LawSection id="id_7F9798E3-20E2-4D9D-8247-581BA3687922">
<ns0:Num>22756.8.</ns0:Num>
<ns0:LawSectionVersion id="id_BD961D43-3163-44DA-8306-DD6DC0AF2104">
<ns0:Content>
<html:p>This chapter does not apply to either of the following:</html:p>
<html:p>
(a)
<html:span class="EnSpace"/>
An entity with 50 or fewer employees.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same as, or more stringent than, this chapter.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
</ns0:LawHeading>
</ns0:Fragment>
</ns0:BillSection>
<ns0:BillSection id="id_0CCAF8A3-BE8E-4573-BD9C-3F48EA77794B">
<ns0:Num>SEC. 3.</ns0:Num>
<ns0:ActionLine action="IS_ADDED" ns3:href="urn:caml:codes:PCC:caml#xpointer(%2Fcaml%3ALawDoc%2Fcaml%3ACode%2Fcaml%3ALawHeading%5B%40type%3D'DIVISION'%20and%20caml%3ANum%3D'2.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'PART'%20and%20caml%3ANum%3D'2.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'CHAPTER'%20and%20caml%3ANum%3D'1.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'ARTICLE'%20and%20caml%3ANum%3D'11.'%5D)" ns3:label="fractionType: LAW_SPREAD||commencingWith: 10285.8" ns3:type="locator">
Article 11 (commencing with Section 10285.8) is added to Chapter 1 of Part 2 of Division 2 of the
<ns0:DocName>Public Contract Code</ns0:DocName>
, to read:
</ns0:ActionLine>
<ns0:Fragment>
<ns0:LawHeading id="id_726CA5F0-78AA-46E8-B5A9-E129E64D9A4F" type="ARTICLE">
<ns0:Num>11.</ns0:Num>
<ns0:LawHeadingVersion id="id_697BFE1F-B6C7-4598-AC3F-3FF5EA5F490D">
<ns0:LawHeadingText>High-Risk Automated Decision Systems</ns0:LawHeadingText>
</ns0:LawHeadingVersion>
<ns0:LawSection id="id_123FB328-5BDC-4CCB-806F-99C63F340A8B">
<ns0:Num>10285.8.</ns0:Num>
<ns0:LawSectionVersion id="id_53891F56-31D4-4CAF-87C3-CC7CBC34DEFE">
<ns0:Content>
<html:p>
(a)
<html:span class="EnSpace"/>
A state agency shall not award a contract for a high-risk automated decision system unless the person to whom the contract is awarded has certified that the high-risk automated decision system does not violate any of the following:
</html:p>
<html:p>
(1)
<html:span class="EnSpace"/>
The Unruh Civil Rights Act (Section 51 of the Civil Code).
</html:p>
<html:p>
(2)
<html:span class="EnSpace"/>
The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code).
</html:p>
<html:p>
(3)
<html:span class="EnSpace"/>
Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code.
</html:p>
<html:p>
(b)
<html:span class="EnSpace"/>
As used in this section, “high-risk automated decision system” has the same meaning as defined in Section 22756 of the Business and Professions Code.
</html:p>
</ns0:Content>
</ns0:LawSectionVersion>
</ns0:LawSection>
</ns0:LawHeading>
</ns0:Fragment>
</ns0:BillSection>
<ns0:BillSection id="id_AE493C6B-D15E-40D1-BB67-6A3298AD25C2">
<ns0:Num>SEC. 4.</ns0:Num>
<ns0:Content>
<html:p>The Legislature finds and declares that Section 2 of this act, which adds Chapter 24.6 (commencing with Section 22756) to Division 8 of the Business and Professions Code, imposes a limitation on the public’s right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:</html:p>
<html:p>To avoid unduly disrupting commerce, it is necessary that trade secrets be protected.</html:p>
</ns0:Content>
</ns0:BillSection>
<ns0:BillSection id="id_C1A81390-0846-44EF-9872-42687AD41418">
<ns0:Num>SEC. 5.</ns0:Num>
<ns0:Content>
<html:p>
No reimbursement is required by this act pursuant to Section 6 of Article XIII
<html:span class="ThinSpace"/>
B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII
<html:span class="ThinSpace"/>
B of the California Constitution.
</html:p>
</ns0:Content>
</ns0:BillSection>
</ns0:Bill>
</ns0:MeasureDoc>
|