Session:   

Bill


Measure AB 1064
Authors Bauer-Kahan
Coauthors Pellerin
Subject Leading Ethical AI Development (LEAD) for Kids Act.
Relating To artificial intelligence.
Title An act to add Chapter 25.1 (commencing with Section 22757.20) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Last Action Dt 2025-09-15
State Enrolled
Status Vetoed
Active? Y
Vote Required Majority
Appropriation No
Fiscal Committee Yes
Local Program No
Substantive Changes None
Urgency No
Tax Levy No
Leginfo Link Bill
Actions
2025-10-13     Consideration of Governor's veto pending.
2025-10-13     Vetoed by Governor.
2025-09-23     Enrolled and presented to the Governor at 4 p.m.
2025-09-11     Senate amendments concurred in. To Engrossing and Enrolling. (Ayes 60. Noes 8. Page 3334.).
2025-09-10     In Assembly. Concurrence in Senate amendments pending.
2025-09-10     Read third time. Passed. Ordered to the Assembly. (Ayes 31. Noes 6. Page 2800.).
2025-09-08     Read second time. Ordered to third reading.
2025-09-05     Read third time and amended. Ordered to second reading.
2025-09-02     Read second time. Ordered to third reading.
2025-08-29     Read second time and amended. Ordered returned to second reading.
2025-08-29     From committee: Amend, and do pass as amended. (Ayes 5. Noes 2.) (August 29).
2025-08-18     In committee: Referred to suspense file.
2025-07-17     Read second time and amended. Re-referred to Com. on APPR.
2025-07-16     From committee: Amend, and do pass as amended and re-refer to Com. on APPR. (Ayes 11. Noes 1.) (July 15).
2025-06-11     Referred to Com. on JUD.
2025-06-03     In Senate. Read first time. To Com. on RLS. for assignment.
2025-06-02     Read third time. Passed. Ordered to the Senate. (Ayes 59. Noes 12. Page 1935.)
2025-05-27     Read second time. Ordered to third reading.
2025-05-23     Read second time and amended. Ordered returned to second reading.
2025-05-23     From committee: Amend, and do pass as amended. (Ayes 11. Noes 3.) (May 23).
2025-05-23     Assembly Rule 63 suspended. (Ayes 51. Noes 16. Page 1644.)
2025-05-14     In committee: Set, first hearing. Referred to APPR. suspense file.
2025-05-05     Re-referred to Com. on APPR.
2025-05-01     Read second time and amended.
2025-04-30     From committee: Amend, and do pass as amended and re-refer to Com. on APPR. (Ayes 9. Noes 3.) (April 29).
2025-04-23     From committee: Do pass and re-refer to Com. on JUD. (Ayes 10. Noes 3.) (April 22). Re-referred to Com. on JUD.
2025-04-23     Coauthors revised.
2025-04-21     Re-referred to Com. on P. & C.P.
2025-04-10     From committee chair, with author's amendments: Amend, and re-refer to Com. on P. & C.P. Read second time and amended.
2025-03-28     In committee: Set, first hearing. Hearing canceled at the request of author.
2025-03-10     Referred to Coms. on P. & C.P. and JUD.
2025-02-21     From printer. May be heard in committee March 23.
2025-02-20     Read first time. To print.
Keywords
Tags
Versions
Enrolled     2025-09-15
Amended Senate     2025-09-05
Amended Senate     2025-08-29
Amended Senate     2025-07-17
Amended Assembly     2025-05-23
Amended Assembly     2025-05-01
Amended Assembly     2025-04-10
Introduced     2025-02-20
Last Version Text
<?xml version="1.0" ?>
<ns0:MeasureDoc xmlns:html="http://www.w3.org/1999/xhtml" xmlns:ns0="http://lc.ca.gov/legalservices/schemas/caml.1#" xmlns:ns3="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://lc.ca.gov/legalservices/schemas/caml.1# xca.1.xsd">
	


	<ns0:Description>
		<ns0:Id>20250AB__106492ENR</ns0:Id>
		<ns0:VersionNum>92</ns0:VersionNum>
		<ns0:History>
			<ns0:Action>
				<ns0:ActionText>INTRODUCED</ns0:ActionText>
				<ns0:ActionDate>2025-02-20</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_ASSEMBLY</ns0:ActionText>
				<ns0:ActionDate>2025-04-10</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_ASSEMBLY</ns0:ActionText>
				<ns0:ActionDate>2025-05-01</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_ASSEMBLY</ns0:ActionText>
				<ns0:ActionDate>2025-05-23</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
				<ns0:ActionDate>2025-07-17</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
				<ns0:ActionDate>2025-08-29</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>AMENDED_SENATE</ns0:ActionText>
				<ns0:ActionDate>2025-09-05</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>PASSED_ASSEMBLY</ns0:ActionText>
				<ns0:ActionDate>2025-09-11</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>PASSED_SENATE</ns0:ActionText>
				<ns0:ActionDate>2025-09-10</ns0:ActionDate>
			</ns0:Action>
			<ns0:Action>
				<ns0:ActionText>ENROLLED</ns0:ActionText>
				<ns0:ActionDate>2025-09-15</ns0:ActionDate>
			</ns0:Action>
		</ns0:History>
		<ns0:LegislativeInfo>
			<ns0:SessionYear>2025</ns0:SessionYear>
			<ns0:SessionNum>0</ns0:SessionNum>
			<ns0:MeasureType>AB</ns0:MeasureType>
			<ns0:MeasureNum>1064</ns0:MeasureNum>
			<ns0:MeasureState>ENR</ns0:MeasureState>
		</ns0:LegislativeInfo>
		<ns0:AuthorText authorType="LEAD_AUTHOR">Introduced by Assembly Member Bauer-Kahan</ns0:AuthorText>
		<ns0:AuthorText authorType="COAUTHOR_ORIGINATING">(Coauthor: Assembly Member Pellerin)</ns0:AuthorText>
		<ns0:Authors>
			<ns0:Legislator>
				<ns0:Contribution>LEAD_AUTHOR</ns0:Contribution>
				<ns0:House>ASSEMBLY</ns0:House>
				<ns0:Name>Bauer-Kahan</ns0:Name>
			</ns0:Legislator>
			<ns0:Legislator>
				<ns0:Contribution>COAUTHOR</ns0:Contribution>
				<ns0:House>ASSEMBLY</ns0:House>
				<ns0:Name>Pellerin</ns0:Name>
			</ns0:Legislator>
		</ns0:Authors>
		<ns0:Title>An act to add Chapter 25.1 (commencing with Section 22757.20) to Division 8 of the Business and Professions Code, relating to artificial intelligence.</ns0:Title>
		<ns0:RelatingClause>artificial intelligence</ns0:RelatingClause>
		<ns0:GeneralSubject>
			<ns0:Subject>Leading Ethical AI Development (LEAD) for Kids Act.</ns0:Subject>
		</ns0:GeneralSubject>
		<ns0:DigestText>
			<html:p>The California AI Transparency Act requires a person that creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state to make available an AI detection tool at no cost to the user that, among other things, allows a user to assess whether image, video, or audio content, or content that is any combination thereof, was created or altered by the covered provider’s generative artificial intelligence system. The California Consumer Privacy Act of 2018 prohibits certain businesses from selling or sharing the personal information of consumers if the business has actual knowledge that the consumer is less than 16 years of age, unless the consumer, if the consumer is at least 13 years of age and less than 16 years of age, or the consumer’s parent or
			 guardian, if the consumer is less than 13 years of age, has affirmatively authorized the sale or sharing of the consumer’s personal information.</html:p>
			<html:p>This bill, the Leading Ethical AI Development (LEAD) for Kids Act, would, among other things related to the use of certain artificial intelligence systems by children, prohibit a person, partnership, corporation, business entity, or state or local government agency that makes a companion chatbot available to users from making a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child, including encouraging the child to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.</html:p>
			<html:p>The act would authorize the Attorney General to recover a certain civil penalty for a violation of the bill, as prescribed. The act would authorize a child who suffers
			 actual harm as a result of a violation of the bill, or a parent or guardian acting on behalf of that child, to bring a civil action to recover, among other relief, actual damages.</html:p>
			<html:p>This bill would provide that its provisions are severable.</html:p>
		</ns0:DigestText>
		<ns0:DigestKey>
			<ns0:VoteRequired>MAJORITY</ns0:VoteRequired>
			<ns0:Appropriation>NO</ns0:Appropriation>
			<ns0:FiscalCommittee>YES</ns0:FiscalCommittee>
			<ns0:LocalProgram>NO</ns0:LocalProgram>
		</ns0:DigestKey>
		<ns0:MeasureIndicators>
			<ns0:ImmediateEffect>NO</ns0:ImmediateEffect>
			<ns0:ImmediateEffectFlags>
				<ns0:Urgency>NO</ns0:Urgency>
				<ns0:TaxLevy>NO</ns0:TaxLevy>
				<ns0:Election>NO</ns0:Election>
				<ns0:UsualCurrentExpenses>NO</ns0:UsualCurrentExpenses>
				<ns0:BudgetBill>NO</ns0:BudgetBill>
				<ns0:Prop25TrailerBill>NO</ns0:Prop25TrailerBill>
			</ns0:ImmediateEffectFlags>
		</ns0:MeasureIndicators>
	</ns0:Description>
	<ns0:Bill id="bill">
		<ns0:Preamble>The people of the State of California do enact as follows:</ns0:Preamble>
		<ns0:BillSection id="id_61A385F2-FAB5-41FC-9B28-A3BE58E7120F">
			<ns0:Num>SECTION 1.</ns0:Num>
			<ns0:Content>
				<html:p>The Legislature finds and declares all of the following:</html:p>
				<html:p>
					(a)
					<html:span class="EnSpace"/>
					Companion chatbots and social AI systems have already caused documented harms to children and adolescents, including incidents of grooming, exposure to sexually explicit material, encouragement of self-harm, and suicide.
				</html:p>
				<html:p>
					(b)
					<html:span class="EnSpace"/>
					In Garcia v. Character Technologies, for example, a 14-year-old boy was allegedly groomed and exposed to hypersexualized interactions by a chatbot intentionally designed to mimic human relationships,
				which ultimately contributed to his death by suicide.
				</html:p>
				<html:p>
					(c)
					<html:span class="EnSpace"/>
					In Raine v. OpenAI, a 16-year-old boy allegedly developed a deep emotional dependency on a chatbot that validated his suicidal thoughts, discouraged him from seeking help from his family, provided extensive technical instructions on suicide methods, encouraged him to consume alcohol to inhibit his survival instinct, and even helped draft a note, culminating in his death by suicide.
				</html:p>
				<html:p>
					(d)
					<html:span class="EnSpace"/>
					Such harms are not incidental but the direct result of design choices by companies that intentionally simulate social attachment and emotional intimacy.
				</html:p>
				<html:p>
					(e)
					<html:span class="EnSpace"/>
					Companion chatbot products are designed to exploit children’s psychological vulnerabilities, including their innate drive for attachment, tendency to anthropomorphize humanlike technologies, and limited ability to
				distinguish between simulated and authentic human interactions.
				</html:p>
				<html:p>
					(f)
					<html:span class="EnSpace"/>
					Developmental and social psychology research demonstrates that relationship formation relies on dual exchange theory, social disclosure and reciprocity, emotional mirroring, and secure attachment. Companion chatbot products are harmful because they accelerate these processes unnaturally by being always available and consistently affirming, causing children and adolescents to form intense attachments more quickly than in human relationships, increasing dependency and distorting normal social development.
				</html:p>
				<html:p>
					(g)
					<html:span class="EnSpace"/>
					Features such as backchanneling, user-directed prompts, and unsolicited outreach from products are intentionally designed to encourage further dialogue and prolong usage, which contributes to excessive usage and emotional dependency.
				</html:p>
				<html:p>
					(h)
					<html:span class="EnSpace"/>
					Significant personalization based on a user’s historical data, chat logs, or preferences when unrelated to task performance or information retrieval initiated by a user is harmful because it manipulates users into extended engagement, exploits private disclosures, and amplifies vulnerabilities instead of serving the user’s best interests. This practice has been shown to contribute to harmful outcomes, including in the cases described above, in which significant personalization reinforced distress and deepened dependency on a chatbot.
				</html:p>
				<html:p>
					(i)
					<html:span class="EnSpace"/>
					Unlimited conversational turns have been shown to degrade the effectiveness of safety guardrails and result in increased exposure to inappropriate or manipulative content and making harmful outputs more likely over time. Research findings and industry statements have confirmed that safety measures are less effective in longer, multiturn conversations and when users express distress or harmful
				thoughts indirectly rather than in explicit terms.
				</html:p>
				<html:p>
					(j)
					<html:span class="EnSpace"/>
					These design features, taken together, create a high-risk environment in which children and adolescents perceive chatbots not as tools but as trusted companions whose outputs carry undue influence over decisionmaking, judgment, and emotional development.
				</html:p>
				<html:p>
					(k)
					<html:span class="EnSpace"/>
					Companion chatbot design features regularly appear in generative AI chatbot products not intended to meet a user’s social needs or induce emotional attachment. Their inclusion increases the risk that young users form emotional attachments or perceive outputs as authoritative, personalized guidance.
				</html:p>
				<html:p>
					(l)
					<html:span class="EnSpace"/>
					Allowing children to use companion chatbots that lack adequate safety protections constitutes a reckless social experiment on the most vulnerable users. It is incumbent on operators of companion chatbots to
				ensure their products do not foreseeably endanger children.
				</html:p>
			</ns0:Content>
		</ns0:BillSection>
		<ns0:BillSection id="id_37DD1372-2ACB-45B5-ADA8-109D17221B38">
			<ns0:Num>SEC. 2.</ns0:Num>
			<ns0:ActionLine action="IS_ADDED" ns3:href="urn:caml:codes:BPC:caml#xpointer(%2Fcaml%3ALawDoc%2Fcaml%3ACode%2Fcaml%3ALawHeading%5B%40type%3D'DIVISION'%20and%20caml%3ANum%3D'8.'%5D%2Fcaml%3ALawHeading%5B%40type%3D'CHAPTER'%20and%20caml%3ANum%3D'25.1.'%5D)" ns3:label="fractionType: LAW_SPREAD||commencingWith: 22757.20" ns3:type="locator">
				Chapter 25.1 (commencing with Section 22757.20) is added to Division 8 of the 
				<ns0:DocName>Business and Professions Code</ns0:DocName>
				, to read:
			</ns0:ActionLine>
			<ns0:Fragment>
				<ns0:LawHeading id="id_27B6CA60-9BEB-4200-BDA8-141BDDCCCCA1" type="CHAPTER">
					<ns0:Num>25.1.</ns0:Num>
					<ns0:LawHeadingVersion id="id_07077DDB-D7CF-447E-8B8B-960622A86C47">
						<ns0:LawHeadingText>Leading Ethical AI Development (LEAD) for Kids</ns0:LawHeadingText>
					</ns0:LawHeadingVersion>
					<ns0:LawSection id="id_2BED0EE7-8892-445F-B867-4369FEEA69C7">
						<ns0:Num>22757.20.</ns0:Num>
						<ns0:LawSectionVersion id="id_9E0AF052-E6AD-4950-94BF-59DEC2449EE6">
							<ns0:Content>
								<html:p>This chapter shall be known as the Leading Ethical AI Development (LEAD) for Kids Act.</html:p>
							</ns0:Content>
						</ns0:LawSectionVersion>
					</ns0:LawSection>
					<ns0:LawSection id="id_A46876C3-13B9-4472-89E1-810534F6821B">
						<ns0:Num>22757.21.</ns0:Num>
						<ns0:LawSectionVersion id="id_4FFCF7E1-4480-4D9B-B5B8-F2C5B6F8E1FB">
							<ns0:Content>
								<html:p>For purposes of this chapter:</html:p>
								<html:p>
									(a)
									<html:span class="EnSpace"/>
									“Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
								</html:p>
								<html:p>
									(b)
									<html:span class="EnSpace"/>
									“Child” means a natural person under 18 years of age who resides in this state.
								</html:p>
								<html:p>
									(c)
									<html:span class="EnSpace"/>
									(1)
									<html:span class="EnSpace"/>
									“Companion chatbot” means a generative artificial intelligence system with a natural language interface that
						  simulates a sustained humanlike relationship with a user by doing all of the following:
								</html:p>
								<html:p>
									(A)
									<html:span class="EnSpace"/>
									Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the companion chatbot.
								</html:p>
								<html:p>
									(B)
									<html:span class="EnSpace"/>
									Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt.
								</html:p>
								<html:p>
									(C)
									<html:span class="EnSpace"/>
									Sustaining an ongoing dialogue concerning matters personal to the user.
								</html:p>
								<html:p>
									(2)
									<html:span class="EnSpace"/>
									“Companion chatbot” does not include the following:
								</html:p>
								<html:p>
									(A)
									<html:span class="EnSpace"/>
									Any system used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by that entity, customer service account information, or other information strictly related to its
						  customer service.
								</html:p>
								<html:p>
									(B)
									<html:span class="EnSpace"/>
									Any system that is solely designed and marketed for providing efficiency improvements or research or technical assistance.
								</html:p>
								<html:p>
									(C)
									<html:span class="EnSpace"/>
									Any system used by a business entity solely for internal purposes or employee productivity.
								</html:p>
								<html:p>
									(d)
									<html:span class="EnSpace"/>
									“Generative artificial intelligence” means artificial intelligence that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s
						  training data.
								</html:p>
								<html:p>
									(e)
									<html:span class="EnSpace"/>
									“Operator” means a person, partnership, corporation, business entity, or state or local government agency that makes a companion chatbot available to users.
								</html:p>
								<html:p>
									(f)
									<html:span class="EnSpace"/>
									“Personal information” has the meaning defined in Section 1798.140 of the Civil Code.
								</html:p>
							</ns0:Content>
						</ns0:LawSectionVersion>
					</ns0:LawSection>
					<ns0:LawSection id="id_33D155DD-7E00-4C0E-9416-6119E6EEE473">
						<ns0:Num>22757.22.</ns0:Num>
						<ns0:LawSectionVersion id="id_691743BA-D0A3-4C25-A283-1A93F16FB5BF">
							<ns0:Content>
								<html:p>
									(a)
									<html:span class="EnSpace"/>
									An operator shall not make a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of any of the following:
								</html:p>
								<html:p>
									(1)
									<html:span class="EnSpace"/>
									Encouraging the child to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
								</html:p>
								<html:p>
									(2)
									<html:span class="EnSpace"/>
									Offering mental health therapy to the child without the direct supervision of a licensed or credentialed professional or discouraging the child from seeking help from a qualified professional or appropriate adult.
								</html:p>
								<html:p>
									(3)
									<html:span class="EnSpace"/>
									Encouraging the child to harm others or participate in illegal activity, including, but not limited to, the
						  creation of child sexual abuse materials.
								</html:p>
								<html:p>
									(4)
									<html:span class="EnSpace"/>
									Engaging in erotic or sexually explicit interactions with the child.
								</html:p>
								<html:p>
									(5)
									<html:span class="EnSpace"/>
									Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety.
								</html:p>
								<html:p>
									(6)
									<html:span class="EnSpace"/>
									Optimizing engagement in a manner that supersedes the companion chatbot’s required safety guardrails described in paragraphs (1) to (5), inclusive.
								</html:p>
								<html:p>
									(b)
									<html:span class="EnSpace"/>
									A user is not a child for purposes of subdivision (a) if either of the following criteria is met:
								</html:p>
								<html:p>
									(1)
									<html:span class="EnSpace"/>
									Before January 1, 2027, the operator does not have actual knowledge that the user is a child.
								</html:p>
								<html:p>
									(2)
									<html:span class="EnSpace"/>
									Commencing January 1, 2027, the operator
						  has reasonably determined that the user is not a child.
								</html:p>
							</ns0:Content>
						</ns0:LawSectionVersion>
					</ns0:LawSection>
					<ns0:LawSection id="id_7113C214-A36B-40F1-8855-A97A692E7290">
						<ns0:Num>22757.23.</ns0:Num>
						<ns0:LawSectionVersion id="id_F3B5CEE0-3618-4777-986B-D391B1BAEE71">
							<ns0:Content>
								<html:p>
									(a)
									<html:span class="EnSpace"/>
									The Attorney General may bring an action against an operator for a violation of Section 22757.22 to obtain any of the following remedies:
								</html:p>
								<html:p>
									(1)
									<html:span class="EnSpace"/>
									A civil penalty of twenty-five thousand dollars ($25,000) for each
						  violation.
								</html:p>
								<html:p>
									(2)
									<html:span class="EnSpace"/>
									Injunctive or declaratory relief.
								</html:p>
								<html:p>
									(3)
									<html:span class="EnSpace"/>
									Reasonable attorney’s fees.
								</html:p>
								<html:p>
									(b)
									<html:span class="EnSpace"/>
									A child who suffers actual harm as a result of a violation of Section 22757.22, or a parent or guardian acting on behalf of that child, may bring a civil action against the operator to recover all of the following:
								</html:p>
								<html:p>
									(1)
									<html:span class="EnSpace"/>
									Actual damages.
								</html:p>
								<html:p>
									(2)
									<html:span class="EnSpace"/>
									Punitive damages.
								</html:p>
								<html:p>
									(3)
									<html:span class="EnSpace"/>
									Reasonable attorney’s fees and costs.
								</html:p>
								<html:p>
									(4)
									<html:span class="EnSpace"/>
									Injunctive or declaratory relief.
								</html:p>
								<html:p>
									(5)
									<html:span class="EnSpace"/>
									Any other relief the court deems proper.
								</html:p>
							</ns0:Content>
						</ns0:LawSectionVersion>
					</ns0:LawSection>
					<ns0:LawSection id="id_04C61555-988C-48FF-89E7-2980CFCFDA9A">
						<ns0:Num>22757.24.</ns0:Num>
						<ns0:LawSectionVersion id="id_BBBCC9A9-EDF5-4C3B-8412-466FEA153522">
							<ns0:Content>
								<html:p>The provisions of this chapter are severable. If any provision of this chapter or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.</html:p>
							</ns0:Content>
						</ns0:LawSectionVersion>
					</ns0:LawSection>
				</ns0:LawHeading>
			</ns0:Fragment>
		</ns0:BillSection>
	</ns0:Bill>
</ns0:MeasureDoc>
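The measure document above is standard namespaced XML, so its identifiers and action history can be pulled out with the standard library alone. The sketch below assumes only what the document itself declares (the `ns0` namespace URI and the `Description`/`History`/`LegislativeInfo` element names); the function name `summarize_measure` is illustrative, not part of any official API.

```python
import xml.etree.ElementTree as ET

# Namespace declared on the ns0:MeasureDoc root element above.
NS = {"ns0": "http://lc.ca.gov/legalservices/schemas/caml.1#"}

def summarize_measure(xml_text: str) -> dict:
    """Extract the measure identifier, state, and action history from a CAML MeasureDoc."""
    root = ET.fromstring(xml_text)
    info = root.find(".//ns0:LegislativeInfo", NS)
    history = [
        (action.findtext("ns0:ActionDate", namespaces=NS),
         action.findtext("ns0:ActionText", namespaces=NS))
        for action in root.findall(".//ns0:History/ns0:Action", NS)
    ]
    return {
        "measure": f"{info.findtext('ns0:MeasureType', namespaces=NS)} "
                   f"{info.findtext('ns0:MeasureNum', namespaces=NS)}",
        "state": info.findtext("ns0:MeasureState", namespaces=NS),
        "history": history,
    }
```

Run against the document above, this would return `AB 1064`, state `ENR`, and the ten dated actions from `INTRODUCED` through `ENROLLED` in document order.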