Photo by Andy Kelly on Unsplash

Growing up with AI

UNICEF released a draft policy guidance on AI for children. This piece comments on it and stresses the need for a more precautionary and child-centred approach to this policy discussion.

Introduction

The UNICEF Office of Global Insight and Policy released its Policy guidance on AI for children (draft 1.0) in September 2020 and initiated an open consultation. It is important that attention is paid to the impact of AI on children, and we welcome the policy guidance as a prelude to further discussion. Our comments are intended to contribute to that discussion. Here we set out the goals of the policy guidance and provide our response to the document.

Policy guidance on AI for children

Governments and corporations are increasingly deploying artificial intelligence (AI) to advance personalised learning, improve healthcare, enhance the provision of social services, and help achieve the United Nations Sustainable Development Goals. However, while AI is a driving force of innovation, it also challenges children's rights, including privacy, safety, and security. With leading policymakers and governments discussing AI policies, UNICEF believes there is a window of opportunity to bring ethical and policy safeguards for children to the fore. Together with the Government of Finland, UNICEF has developed draft policy guidance to promote children's rights in AI policies and to raise awareness of how AI systems can uphold or undermine those rights. According to UNICEF, ‘the ultimate purpose of the guidance is to aid the protection and empowerment of children in interactions with AI systems and enable access to its benefits in all aspects of life’.

The consultation & our response

The open consultation was conducted via an online form available to the public. Below are some of the points that we raised in our response.

Opportunities for children from AI are unclear

We have concerns about some of the remarks and principles put forward in the document, especially those on the key opportunities AI presents for children and on how to ensure the inclusion of children.

Remark 1: ‘Aid children's education and development.’ Online education is exhausting. After teaching online for many months during the coronavirus outbreak, we acknowledge that new technologies allow us to keep education going, but it is certainly very different. Too much focus on AI in education risks overlooking the dangers of fully digital environments, which are linked to greater solitude and alienation. We do not deny that AI can have benefits in education, such as personalised learning, but this is not self-evident.

Remark 2: ‘Contribute to better health outcomes for children.’ Although this section is promising, it barely refers to children. For instance, much more could have been said about children on the autism spectrum, or about access. Better health outcomes cannot come at the expense of other rights. The child's dignity and other rights need to be taken into consideration, avoiding false dichotomies of less privacy for better health outcomes, as the text seems to imply.

Principle 2: ‘Ensure inclusion of and for children.’ The document seems to highlight the importance of inclusive design strategies when stating that ‘all children should be empowered by AI and play a leading role in designing a responsible digital future for all’. However, critical questions as to how and to what extent the input of children is necessary or desirable to ‘design a responsible digital future for all’ need to be answered. Children are often not in a position to understand the magnitude of the problems or risks. Parents, too, are sometimes oblivious to those risks: they are often emotionally invested in seeing technology as a solution for their child, and the technology is often too complex for them to appreciate its use and consequences.

Clear child-oriented examples are missing

The document gives no clear examples of assistive technologies for children. Interesting examples are robots that help children perform physical tasks, such as paediatric exoskeletons that help them learn how to walk, or feeding assistants that help them during mealtimes. Yet access to these technologies is not widely discussed.

Inappropriate formulation of principles

As we have seen, the policy guidance formulates a number of guiding principles, but their formulation remains very vague. For instance, the caption of the requirement ‘Prioritize fairness and non-discrimination for children’ is inappropriate. The caption reads: ‘AI must be for all children.’ Non-discrimination is not about AI being for all children, but about children not being excluded or treated differently on unfair grounds. Moreover, principles of inclusiveness and diversity may also require different adoption and access strategies for children. The elaboration at the end of the report is much more revealing, but still far from providing practical guidance to stakeholders.

Principle 8: ‘Prepare children for present and future developments in AI’ presents AI as something inevitable, as if alternatives do not exist or are not even preferable. In contrast, existing research on the impact of technology on humans (and on child development) highlights the importance of finding alternatives to screen-based and other technology. This requirement should be redirected to designers and governments, asking them to identify alternatives to using all these technologies. A good starting point is to ask what problem needs to be solved and whether technology (and which technology) is the solution.

Missing requirements and core principles

There is no requirement on access to the technologies that advance children's wellbeing. For instance, paediatric access to exoskeletons lags far behind that of adults. In a recent article, we query the possible reasons for this variability in access, explicitly focusing on children, who constitute a categorically vulnerable population and who also stand to benefit significantly from the use of this technology at this critical point in their physical and emotional growth.

There is no reference to the dignity of children and available alternatives. The growing body of literature pointing to the adverse effects of technology is not reflected in the document. At the same time, we see growing attention to non-technological alternatives, usually more organic and based on human touch, which are available to the privileged but not to the masses. The masses, on the contrary, seem to be forced to use technology as the only solution, without much say. The result is a growing divide within and among societies that is likely to affect children as well.

Recommendations for the UNICEF Policy Guidance

The policy guidance mentions all kinds of benefits and opportunities of AI for children. However, these benefits and opportunities will not be equal for all children. Moreover, the examples of opportunities also carry risks that are not mentioned in the policy guidance. A more nuanced picture would have been appropriate.

The practical implementation of the requirements will not be easy. The document reads more as a policy statement, or perhaps a first exploration of the topic; it is far from the practical guidance for the development and implementation of AI for children that it aims to be. More research is needed to understand how these principles apply in practice and how stakeholders and their interests differ, depending, for example, on whether they are States or private corporations.

The assessment of the development and use of AI considers only risks and opportunities. What is needed is a more comprehensive picture of the social and economic context in which these technologies are developed. Context strongly influences the development and use of AI. In this respect, the policy guidance should link the development and use of AI to broader discourses on, for example, the impact of surveillance (both by States and by private corporations) and the economic exploitation of children. So, besides a discussion of rights (which admittedly is extremely important), there should also be a discussion of the extent of the tech industry's power over children's lives and well-being.

When it comes to data protection, it is important that rights and principles are taken seriously and enforced. This would address some of the complexity that results from datafication, given that data protection law promotes transparency, data minimisation (e.g. through privacy by design), and accountability – certainly relevant for children, but of course equally relevant for adults (many of whom are parents), for whom the data jungle is also incomprehensible.

Besides, where should we focus our attention when it comes to children? Many of the points mentioned apply equally to adults. Will we focus on children properly, or only incidentally, when we start implementing fundamental rights more generally (rights which apply to both adults and children)? Or should explicit account be taken of children, given their – in many respects – vulnerable position? But then, what is this specific vulnerability when it comes to AI technology? Like the policy guidance, we find a child-centred approach essential, an approach that can benefit us all. However, the policy guidance does not yet specifically state where children's vulnerabilities lie in AI applications. To give an example from the context of emotional AI: apart from the question of whether the automated reading of emotions is possible at all, it certainly does not seem very effective for children. AI may have a different impact on children than on adults, which could imply a different interpretation of the suitability and desirability of AI for children.

However, apart from noting that children are still developing and require specific attention, the policy guidance does not have much to offer. We believe that a more in-depth study is needed to determine how AI could harm (or, for that matter, benefit) children's well-being, both in physical and psychological terms, beyond security, privacy, and dignity. In this context, attention to implementing the precautionary principle in situations of uncertainty regarding risks and actual harm is instrumental in achieving the goals of this policy guidance. It is not inconceivable – indeed, it can be expected – that the consequences of using AI will only become clear much later.

The precautionary approach is tentatively mentioned in draft General Comment No. 25, ‘Children's rights in relation to the digital environment’, by the UN Committee on the Rights of the Child, and this approach deserves further elaboration in relation to AI technology as well. Such an approach already seems to be taken by the Council of Europe, which recommends banning automated profiling unless it is in the child's best interests or there is an overriding public interest (with appropriate safeguards in place). Also, concerning Article 22 of the GDPR (on the right not to be subject to automated decision-making, including profiling), child-centred interpretations are emerging, putting the child's best interests and welfare at the forefront. In addition, precaution is an important part of impact assessments, such as the data protection impact assessment (Article 35 of the GDPR) and the child impact assessment designed by UNICEF (see further The Importance of Privacy by Design and Data Protection Impact Assessments in Strengthening Protection of Children's Personal Data Under the GDPR).

Wrapping up

The development of policies for AI and children must go beyond addressing the risks and stimulating opportunities. It should also stimulate discussion on whether and when AI contributes to children's lives, well-being, and health, and in which cases it is not a desirable development, either because it causes harm to children or because the possible negative consequences are still too uncertain.

The authors are associated with the Working group on the Law and Autonomous Systems (including AI) at the Center for Law and Digital Technologies, Institute for the Interdisciplinary Study of the Law, Leiden Law School, Leiden University, the Netherlands. More information: https://www.universiteitleiden.nl/en/law/institute-for-the-interdisciplinary-study-of-the-law/elaw.

1 Comment

Elroam

Interesting and important issue. Beyond the very well-known and universal issues to do with security, privacy, data protection, and so forth, two major issues are worth noting, especially concerning children:

First, the "big brother" impact. Children can, generally speaking, be more influenced and affected by what surrounds them. They may get used to the idea that machines are a source of authority, having authoritative value or impact in life. This is very dangerous, of course. The machine "knows" intimate details about them and functions as a mentor of sorts, dictating actions and commands. So children grow up with the idea, or notion, that machines are intimately connected to their lives. All this can create not only a deficiency in human touch or contact, but also empower machines in their eyes, making them oblivious to their privacy and intimacy beyond the strict and narrow context of interaction with the specific AI machine they practise with. For example, they may lack a sense of criticism about police or government "big brother" tools applied to them (in the future, let's say).

Second, there is the notion I call "virtual brain space", meaning that every person stores in his mind and soul a certain volume of virtual space. The latter consists of the sum of memories, experiences, imaginations, remote places (whether fictional or not), and all sorts of imaginary notions. Suppose one is a historian: he has a bigger space, because he has learned history and can and does travel back and forth to ancient periods, places, and people of all sorts. This enriches and enlarges his "virtual brain space". Now:

Such AI, like other digital machines in use, may narrow that space, entrench it, or limit it to digital screens. But it may also restrict children's self-creation, for that space should also be built through self-creation, which is very critical to a child's development.

So, the principle should be clear:

Such machines should be secondary to the child: they should help to enhance and develop cognitive capacity, self-creation, imagination, and the volume of virtual brain space, not limit it and entrench it within the confines of digital screens.

Thanks