
Commentary: AI can help fix what's broken in foster care

Maureen Flatley and Taylor Barkley, Tribune News Service


President Donald Trump's executive order directing states to deploy artificial intelligence in foster care isn't just welcome—it's overdue.

The provision calling for "predictive analytics and tools powered by artificial intelligence, to increase caregiver recruitment and retention rates, improve caregiver and child matching, and deploy Federal child-welfare funding to maximally effective purposes" addresses real failures in a system that desperately needs help.

The foster care system's problems aren't hypothetical. Caseworkers manage 24-31 families each, with supervisors overseeing hundreds of cases. Children wait years for permanent placements. Around 2,000 children die annually from abuse and neglect, with reporting gaps suggesting the real number is higher. Overburdened workers rely on limited information and gut instinct to make life-altering decisions. This isn't working.

AI offers something the current system lacks: the ability to process vast amounts of information to identify patterns human caseworkers simply cannot see. Research from Illinois demonstrates this potential. Predictive models can identify which youth are at highest risk of running away from foster placements within their first 90 days, enabling targeted interventions during a critical window. Systems can flag when residential care placement is likely, allowing caseworkers to connect families with intensive community-based services instead. These aren't marginal improvements—they represent the difference between crisis response and genuine prevention.
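For readers curious what "predictive analytics" means in practice, here is a minimal, purely illustrative sketch, not the Illinois model or any agency's actual system: a logistic regression, trained on hypothetical placement records with made-up file and column names, that estimates the probability a new placement disrupts within its first 90 days.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical placement records; file name and columns are assumptions.
data = pd.read_csv("placement_history.csv")
features = ["child_age", "prior_placements", "school_changes_last_year", "siblings_separated"]
X, y = data[features], data["disrupted_within_90_days"]

# Hold out part of the data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability-of-disruption scores a caseworker could use to prioritize early support.
scores = model.predict_proba(X_test)[:, 1]
print("Holdout AUC:", roc_auc_score(y_test, scores))

A score like this is only a prompt for earlier attention; as the safeguards below make clear, it should never decide a placement on its own.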

Critics worry AI will amplify existing biases in child welfare. This concern, while understandable, gets the analysis backwards. Human decision-making already produces deeply biased outcomes: subjective assessments by overwhelmed caseworkers operating without adequate information lead to inconsistent, sometimes discriminatory decisions. Research presented by Dr. Rhema Vaithianathan, director of the Centre for Social Data Analytics at Auckland University of Technology and lead developer of the Allegheny County Family Screening Tool, revealed something crucial: even when Black children scored as low-risk, they were still investigated more often than white children with similar scores. The algorithm didn't create that disparity; it helped surface bias in human decision-making that had previously gone unmeasured.

That's AI's real promise: transparency. Unlike the black box of human judgment, algorithmic decisions can be examined, tested, and corrected. AI makes bias visible and measurable, which is the first step to eliminating it.

None of this means AI deployment should be careless. The executive order's 180-day timeline is ambitious, and implementation must include essential safeguards:

Mandatory bias testing and regular audits should be standard for any AI system used in child welfare decisions. Algorithms must be continuously evaluated for disparate racial or ethnic impacts, with clear thresholds triggering review and correction (a rough illustration of such a check appears after these safeguards).

Human oversight remains essential. AI should inform, not dictate, caseworker decisions. Training must emphasize that risk scores and recommendations are tools for professional judgment, not substitutes for it. Final decisions about family separation or child placement must rest with trained professionals who can consider context algorithms cannot capture.


Transparency requirements should apply to any vendor providing AI tools to child welfare agencies. Proprietary algorithms are fine for commercial applications, but decisions about children's lives demand explainability. Agencies must understand how systems reach conclusions and be able to articulate those rationales to families and courts.

Rigorous evaluation must accompany deployment. The order's proposed state-level scorecard should track not just overall outcomes but specifically whether AI tools reduce disparities or inadvertently increase them. Independent researchers should assess effectiveness, and agencies must be willing to suspend or modify systems that don't perform as intended.
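As a rough illustration of the bias check described above, and nothing more than that, the sketch below assumes an exported file of model scores and demographics with hypothetical column names, and compares high-risk flag rates across racial groups against an assumed review threshold.

import pandas as pd

# Assumed export of model scores plus demographics; file and column names are hypothetical.
audit = pd.read_csv("scored_cases.csv")
audit["flagged"] = audit["risk_score"] >= 0.8   # assumed high-risk cutoff

overall_rate = audit["flagged"].mean()
group_rates = audit.groupby("race")["flagged"].mean()

# Assumed review threshold: any group flagged at a rate more than 20% above or
# below the overall rate gets referred for human review of the model.
for group, rate in group_rates.items():
    ratio = rate / overall_rate
    if abs(ratio - 1) > 0.2:
        print(f"Review triggered: {group} flag rate is {ratio:.2f}x the overall rate")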

The alternative to AI isn't some pristine system of perfectly unbiased human judgment—it's the status quo, where overwhelmed caseworkers make consequential decisions with inadequate information and no systematic oversight. Where children fall through cracks that better data analysis could have prevented. Where placement matches fail because no human could possibly process all relevant compatibility factors. Where preventable tragedies occur because risk factors weren't identified in time.

Implementation details matter enormously, and HHS must get them right. But the executive order's core insight is sound: AI and predictive analytics can transform foster care from a crisis-driven system to one that prevents harm before it occurs. The question isn't whether to deploy these tools; it's how to deploy them responsibly. With proper safeguards, AI can address the very problems critics fear it will create.

America's foster children deserve better than the status quo. AI gives us a path to deliver it.

____

Maureen Flatley is an expert in child welfare policy and has been an architect of a number of major child welfare reforms. She also serves as the President of Stop Child Predators. Taylor Barkley is Director of Public Policy at the Abundance Institute, focusing on technology policy and innovation.

_____


©2025 Tribune Content Agency, LLC.

 
