Can AI improve family life?

The public debate about the future of artificial intelligence (AI) often focusses on two main concerns: the technology’s broader impact on humanity, and its immediate effects on individuals. For the most part, people want to know how automation will transform work. Which industries will still be around tomorrow? And whose job is on the line today?

But the debate has overlooked an important pillar of society: the family. If we are going to build AI systems that will help solve, rather than exacerbate, pressing social and economic problems, we should remember that families comprise 89% of American households, and we should consider the complex pressures they face when deciding how to apply the technology.

After all, families in the US are in desperate need of support. According to the World Economic Forum, America’s US$6 trillion care economy is at risk of collapsing, owing to labour shortages, administrative burdens, and a broken market model whereby most families cannot afford the full cost of care and workers are chronically underpaid.

Moreover, parenthood has changed: more parents are working, and demands on their time, from caring for children and ageing parents to managing information overload and coordinating household tasks, have intensified.

Using AI as a co-pilot for families could save time – and sanity. An AI assistant could decipher school emails and activity schedules or help prepare for an upcoming family trip by making a packing list and confirming travel plans.

If augmented by AI, the care robots being developed in Japan and elsewhere could support the privacy and autonomy of those receiving care and enable human caregivers to spend more time establishing emotional connections and providing companionship.

Designing AI to assist with complex human problems such as parenting or elder care requires defining its role. In today’s world, caregiving, and especially parenting, consists of too many mundane tasks that eat into the time available for more meaningful activities.

AI could thus function as “anti-tech tech” – a shield from the always-on culture of email, text messages, and endless to-dos. The ideal AI co-pilot would shoulder the bulk of this busywork, allowing families to spend more time together.

But complex human tasks are typically “iceberg” problems, with the majority of the work hidden beneath the surface. An AI co-pilot that handles only the visible labour would do little to alleviate the caregiver’s burden, because completing these tasks requires a full understanding of what needs to be done.

For example, we can build the technology to create calendar entries from an email with the schedule for a youth soccer team (and then delete and recreate them when it inevitably changes a week later).

But to free a parent from the invisible load of managing a kid’s sports season, AI would need to understand the various other tasks that lie beneath the surface: looking for field locations, noting jersey colours, signing up for snack duties, and creating the appropriate reminders.

If one parent had a scheduling conflict, the AI assistant would have to alert the other parent, and if both had conflicts, it would have to schedule time for a conversation, in recognition of how important it can be for a child to have a parent or loved one at their game.

The challenge is not coming up with an answer, but rather coming up with the right answer given the complex context, much of which is embedded in parents’ brains. Through careful exploration and curation, this knowledge could one day be converted into data for training specialised family AI models. By contrast, large language models such as GPT-4, Gemini, and Claude are generally trained on public data collected from the internet.

Developing an AI co-pilot for caregivers would undoubtedly test the technology’s technical limits and determine the extent to which it can account for moral considerations and societal values.

In a forthcoming paper titled “Computational Frameworks for Care and Caregiving Frameworks for Computing,” the experimental psychologist Brian Christian explores some of the biggest challenges of trying to translate care into the mathematical “reward functions” necessary for machine learning.

One example is when a caregiver intervenes on the basis of what they believe to be in a child’s best interests, even if that child disagrees. Christian concludes that “the process of trying to formalise core aspects of the human experience is revealing to us what care really is – and perhaps even how much we have yet to understand about it”.

Like office work, much of family life consists of repetitive and mundane tasks that could be completed by AI. But unlike office work, training such an AI model would require carefully collecting and transmitting the specialised practices of an intimate world.

It is worth the effort, though: an AI assistant for caregivers would free up time and energy for empathy, creativity, and connection. More importantly, identifying which parts of caregiving can be performed by AI is likely to teach us a great deal about which family functions and activities should remain fully and solely human.

Anne-Marie Slaughter, a former director of policy planning in the US State Department, is CEO of the think tank New America and professor emerita of politics and international affairs at Princeton University.

Avni Patel Thompson is, most recently, the founder and CEO of Milo, an AI-powered partner that helps parents tackle the invisible load of managing a family.

The views expressed are those of the writers and do not necessarily reflect those of FMT.
