Should parents be protecting their children from AI?

Dr Mia Eisenstadt
3 min read · Aug 13, 2023

One of the burning questions raised by a dad on the fatherli parenting app is how to talk to your kids about AI Chatbots: how to protect them from harm, while also sharing all the cool and incredible things AI can do. The answer is not straightforward. The issue has come to the fore recently with reports of young people using Chatbots to generate anorexia meal plans.

As a researcher and person working in tech, I’m a big fan of AI Chatbots and the current explosion of AI tools: tools that can help with dyslexia and learning difficulties, and, more recently, tools behind massive advances in medical imaging, breast cancer screening, diabetes and heart failure prediction, and forecasting the number of hospital beds needed.

At the same time, the speed at which new products are created and scaled means that safety and user data protection aren’t always baked in (see below). A recent Washington Post article, “AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it”, reports that when it comes to health and dieting advice, Bard and some other AI assistants can give dangerous advice that promotes eating disorder behaviours (such as an anorexia meal plan) or create harmful images, because the advice is gleaned from what already exists on the internet, some of which is harmful content.

Garbage in, garbage out is dangerous when it comes to health advice

With large language models, as with research in general, there is the principle of “garbage in, garbage out” (GIGO). By this principle, if the AI Chatbot takes poor-quality advice from the internet (e.g. pro-anorexia tips from the darker corners of the web) and then packages it for the user, the personalised advice will be bad and may promote disordered eating (see the article for details). If the data can be checked, or is drawn from a trusted knowledge domain, then the resulting response from the Chatbot can be high quality.

The internet is already unregulated for harmful content

However, what this important Washington Post article doesn’t weigh up is that the AI chatbot is repackaging harmful advice that already exists on internet forums. In many ways, this is an amplification of an existing, unregulated problem: harmful content on the internet (e.g. the many anorexia forums) that is already a threat to the health of adolescents and adults.

So, first of all, I can’t give a high-quality, evidence-based answer, as the research isn’t there yet. However, as a parent and researcher, I think it’s safe to say that at this early stage the quality of Chatbot responses is variable, and that Chatbots are not free from the risks of harmful content that already exist on the internet.

Parents, have you come across ChatGPT jailbreaking? Your kids might have.

It can also be helpful for parents to be aware of jailbreaking. ChatGPT jailbreaking is the practice of removing restrictions and limitations from ChatGPT, often by using certain prompts that override its existing safeguards. I’d personally be careful about children and adolescents overriding the safeguards on AI assistants, or using them to plan diets.

Over time, and likely at a fast pace given the rapid AI developments of 2023, I imagine a quality or harmful-information check will be built into AI chatbots. Many AI companies and the US Government Model Forum are taking ethics very seriously and are working on this issue already.

However, there are clear and immediate dangers to young people, alongside the longer-term risks of AI (see this Scientific American article for an overview).

Additional thoughts to protect young people

This blog came from a LinkedIn post which received a number of comments. One commenter suggested that, in addition to prompt engineering, users of AI may need training to be discerning about the quality of the content that Chatbots create. I think this is an excellent proposal: being critical and evaluating the quality of chatbot output as a user is essential, and a key skill to teach children and young people.


Dr Mia Eisenstadt

Specialising in child and family wellbeing and mental health. Instagram: mia_psychologyandwellbeing