In recent years, getting to know the people who work for you has become much easier. With the growth of social media, the quantified-self movement and ubiquitous video cameras, little anonymity is left. The robotisation of the workplace has exposed employees’ data on a wide scale. Added to that, the shift towards flexible, non-permanent and less secure employment practices has left modern employees in an unfavourable position.
The robotisation and datafication of work have led to another important consequence for individual privacy. The private sphere and the sphere of work have begun to overlap significantly. The same email address may be used for both private and work-related communications. What someone posts on their Twitter timeline can be a private matter, but also information that concerns their work. There is no longer a clear line between what is office and what is home.
In an opinion issued in July, the Article 29 Working Party noted that these trends pose a challenge for the protection of employees’ data and privacy. Namely, the complex and opaque nature of digital technologies sometimes eludes strict data protection rules. As the Working Party makes clear, this should not be tolerated. Since privacy is relevant for everyone, including students who are just about to enter the job market, this blog sheds light on the application of data protection principles in a robotised and data-driven work environment. To highlight some of the problems arising from it, three scenarios are presented below and analysed through the lens of the new EU General Data Protection Regulation (GDPR).
II. Three scenarios
- Scenario 1 – chip implants
As the New York Times reports, employees at Three Square Market, a technology company in Wisconsin, can choose to have a chip the size of a grain of rice injected between their thumb and index finger. Once that has been done, any task involving RFID technology (which uses electromagnetic or electrostatic coupling in the radio-frequency portion of the electromagnetic spectrum to uniquely identify an object, animal or person), such as swiping into the office building or paying for food in the cafeteria, can be accomplished with a wave of the hand. The chip cannot be removed without medical intervention, so not only workplace-related movements can be tracked, but essentially any movement throughout the day, including, for instance, night-time toilet visits. Although the technology is highly invasive, it is likely to be widely used soon, in particular in tech-savvy and less privacy-sensitive environments, such as start-up incubators.
- Scenario 2 – assessing workers by using algorithms
In recent years, artificial intelligence (AI) tools have become increasingly popular for assessing workers and ranking them from the most to the least capable. Those at the bottom of the list risk being fired. Cathy O’Neil wrote about US teachers who were evaluated, and many of them dismissed, on the basis of an AI rating score. It later turned out that the algorithmic decision-making used flawed metrics and that the dismissed teachers had actually been doing just fine. The AI tool evaluated teachers solely on the basis of students’ test scores, ignoring how much the teachers engaged the students, worked on specific skills, dealt with classroom management, or helped students with personal and family problems.
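To make the flaw concrete, here is a minimal, purely illustrative sketch of a scoring metric of the kind described above. It is not the actual tool; the names, numbers and thresholds are invented. The point is that a metric built only on test-score gains is structurally blind to everything else a teacher does.

```python
# Toy sketch of a test-score-only teacher rating, for illustration.
# All names and figures are hypothetical, not taken from the real system.

def teacher_score(scores_before: list[float], scores_after: list[float]) -> float:
    """Rate a teacher solely by the average change in student test scores."""
    gains = [after - before for before, after in zip(scores_before, scores_after)]
    return sum(gains) / len(gains)

# Engagement, classroom management and pastoral care are invisible here:
teachers = {
    "A": teacher_score([60, 70, 80], [62, 71, 79]),   # modest gains
    "B": teacher_score([40, 45, 50], [55, 60, 66]),   # large gains
}

# Teachers at the bottom of such a ranking risked dismissal.
ranking = sorted(teachers, key=teachers.get, reverse=True)
print(ranking)  # ['B', 'A']
```

A teacher whose students start near the top of the scale (teacher A) can rank poorly however well they teach, which is exactly the kind of distortion O’Neil documented.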
- Scenario 3 – how Uber manipulates workers with data
A case study by the AI Now Institute demonstrates how employers not only monitor but also manipulate workers by using big data. Uber is a ride-sharing platform. For commercial reasons, the company wants to maintain a supply of available cars, even during times of low demand when drivers make less money. To address this, the company drew on behavioural-economics research about the psychological tendency of taxi drivers to set earnings goals and stop working once they reach them. Uber discovered that drivers quickly abandon such mental income targets in favour of working at times of high demand. To counteract this tendency, Uber sent tailored nudge messages to drivers, indicating when they were close to their revenue targets, at times when it was advantageous for Uber to keep its drivers on the road. This was made possible only by the analysis of big data collected in real time from all the drivers’ mobile devices.
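The nudging logic described above can be sketched in a few lines. This is a hypothetical reconstruction, not Uber's actual code: the function name, the 10% proximity threshold and the dollar figures are all assumptions made for illustration.

```python
# Hypothetical sketch of target-proximity nudging during low demand.
# Threshold and target values are invented for illustration only.

def should_nudge(earnings_today: float, inferred_target: float,
                 demand_is_low: bool, threshold: float = 0.9) -> bool:
    """Nudge when the driver is within 10% of (but below) an inferred
    earnings target, and only while demand is low."""
    near_target = threshold * inferred_target <= earnings_today < inferred_target
    return demand_is_low and near_target

# A driver at $92 of an inferred $100 target gets a message during low demand:
print(should_nudge(92.0, 100.0, demand_is_low=True))   # True
print(should_nudge(92.0, 100.0, demand_is_low=False))  # False
```

What makes the practice privacy-relevant is not this trivial rule but its inputs: the per-driver earnings and inferred targets can only come from continuous analysis of data streamed from drivers’ devices.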
III. Legal discussion
The processing of personal data, regardless of how technically developed or commercially needed it is, has to comply with a number of data protection principles. With the adoption of the new EU General data protection regulation (GDPR), these principles have been strengthened and elaborated. In what follows, three of them will be discussed in more detail: the principle of lawfulness (Article 6 of the GDPR), the principle of fairness (Article 5(1)(a) of the GDPR) and the principle of purpose limitation (Article 5(1)(b) of the GDPR).
- The principle of lawfulness
This principle provides that all data processing must rest on a solid legal basis. The GDPR offers several options: consent, contract, legitimate interest, legal obligation, vital interests and public interest. In the employment context, the first three are particularly relevant. Recently, there has been a trend to move from the legal bases of consent and contract to the legal basis of legitimate interest. This is understandable: due to the strict conditions attached to consent, it is often impossible for employers to use it as a legal basis for certain data-driven activities. However, legitimate interest can be a dangerous alternative, as it removes control from the data subject and requires a balancing of rights, which is never clear-cut.
The Working Party insists that the legitimate interest of employers can be invoked as a legal ground only if the processing is strictly necessary for a legitimate purpose, complies with the principles of proportionality and subsidiarity, and is preceded by a proportionality test conducted before any monitoring tool is deployed. This may prove a headache for employers who want to introduce technical developments that cannot be described as strictly necessary. For example, how would an employer argue that inserting chips into employees’ hands (Scenario 1) is a strictly necessary measure?
- The principle of purpose limitation
The principle of purpose limitation provides that data must be collected for specified, explicit and legitimate purposes and must not be further processed in a way that is incompatible with those purposes.
One important point to add is that, in order to be compatible, the purpose of data processing should remain within a data subject’s reasonable expectations.
The increase in the amount of data generated in the workplace environment, in combination with new techniques for data analysis and cross-matching, may also create the risk of incompatible further processing.
In Scenario 3, a vast amount of driver data was generated, which gave Uber the opportunity to analyse it and use it to nudge the drivers. Even though Uber drivers knew they were being monitored, they did not know that their data was being used to keep them on the road for longer periods of time. Had they been aware of this, it is very likely that they would not have approved, as the practice clearly exceeded the boundaries of their privacy expectations.
- The principle of fairness
Fairness is defined in the Oxford English Dictionary as ‘impartial and just treatment or behaviour without favouritism or discrimination’. In the data protection context, however, fairness as a notion remains undefined. Through the GDPR lens, the principle of fairness can be analysed from an internal angle (the GDPR provisions) and an external one (provisions that refer to human rights). The former, which is of special interest for our topic, has a strong connection with the provisions on transparency.
According to the Working Party, the transparency requirements of Articles 10 and 11 apply to data processing at work; employees must be informed of the existence of any monitoring, the purposes for which personal data are to be processed and any other information necessary to guarantee fair processing.
In terms of new, data-driven technologies, the need for transparency becomes even more evident, since these technologies enable the collection and further processing of huge amounts of personal data in a covert way. Take, for example, Scenario 3: are workers adequately informed about how their data is shared and whether it is deleted if they change jobs? In Scenario 2, the transparency of the tool that calculates teachers’ scores is critical. How do teachers know how their score has been calculated? Often, the results are based on complex algorithms that are, to a large extent, inexplicable to ordinary people.
IV. Conclusion
Although robotisation and datafication have created a very turbulent situation for modern workers, the principles of data protection remain the same. Adapting new technologies and uses of data to the rigid rules of data protection law is not easy, but it is indispensable. On the one hand, this is because we value privacy in the workplace just as much as we value privacy in our homes. On the other, the protection of data and individual privacy is instrumental to many other values, such as personal liberty and dignity. If we want to ensure that these values are maintained in the workplace, data protection most certainly has a big role to play.