Data Poisoning Attacks on Crowdsourcing-Based Machine Learning

Photo by Bill Oxford on Unsplash

Recently, a paper with the title “Data Poisoning Attacks and Defenses to Crowdsourcing Systems” was published on arXiv, in which the authors analyze data poisoning attacks on crowdsourced data labeling. Here is a summary.

For classification tasks such as image classification, large, high-quality labeled datasets are required to build machine learning models that achieve state-of-the-art performance. However, creating these datasets is often challenging: in many situations only unlabeled data is available, and labeling millions or even billions of items would require enormous manual effort involving thousands of people.

Labeling Datasets via Crowdsourcing

To still obtain labels for an unlabeled dataset, one option is crowdsourcing. Here, the labeling is done by a crowd that might consist of thousands or tens of thousands of individuals, also known as workers. Each worker gets a small subset of the items of the full dataset and assigns labels to these items. When a worker has labeled all of their items, they send the results back to the server, which aggregates the results of all workers.

The results are usually noisy and unreliable, as each worker might make mistakes and might also be biased. Therefore, the same item is usually sent to several workers, and the server computes the final label for an item from all answers it received for that item. To compute the correct label, so-called truth discovery methods are often used. These methods compute a weighted aggregation of all results, where a worker’s weight typically depends on the worker’s reliability: the more reliable a worker is, the more impact the worker’s results have on the final label. Various approaches exist to estimate a worker’s reliability. For instance, a worker might be rated as reliable if the results returned by the worker do not deviate too much from the results of the majority.
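To make this more concrete, here is a minimal sketch of what such a reliability-weighted aggregation could look like for a single item. The function name and the fixed reliability scores are purely illustrative and are not taken from the paper; the truth discovery methods discussed later estimate these weights instead of assuming them.

```python
import numpy as np

def weighted_aggregate(values, reliabilities):
    """Aggregate the answers of several workers for one item.

    values        -- answers that the workers returned for the item
    reliabilities -- one non-negative reliability score per worker
    (hypothetical helper, for illustration only)
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(reliabilities, dtype=float)
    weights = weights / weights.sum()          # normalize to one weight per worker
    return float(np.dot(weights, values))      # reliability-weighted average

# Three workers label the same item; the two reliable workers dominate the result.
print(weighted_aggregate([70, 65, -20], [0.9, 0.8, 0.1]))
```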

Most aggregation methods have been developed to make the labeling process more robust to noise introduced by mistakes and bias. However, many of them don’t consider that malicious workers might exist who intentionally assign wrong labels to items with the goal of degrading the performance of a machine learning model.

Impact of Poisoning Attacks on Crowdsourced Labeling

The machine learning community has shown that these so-called poisoning attacks can have a significant impact on crowdsourcing-based solutions for labeling datasets. Yet comprehensive research does not exist. For instance, most existing work focuses on categorical features and does not study the effect of poisoning attacks on other feature types.

In this research paper the authors aim to close this gap by analyzing the effect of poisoning attacks on two state-of-the-art truth discovery methods for continuous features: “conflict resolution on heterogeneous data” (CRH) and the “Gaussian truth model”. For their study the authors use a synthetic dataset and two real-world benchmark datasets. One of them is the Emotion dataset, in which each worker gets some documents and assigns each document a sentiment value between -100 and 100. The other real-world dataset is the Weather dataset, which contains temperature forecast information.
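As an illustration of the general idea behind such truth discovery methods, here is a simplified, CRH-style iterative sketch: the truths are estimated as reliability-weighted averages, and the worker weights are then updated based on how far each worker’s answers deviate from the current truth estimates. This is my own simplification for intuition, not the exact formulation analyzed in the paper.

```python
import numpy as np

def crh_style_truth_discovery(answers, n_iter=10, eps=1e-12):
    """Simplified sketch of iterative truth discovery for continuous values.

    answers -- array of shape (n_workers, n_items) with each worker's value
               for each item (assumes every worker answered every item).
    Returns the estimated truths and the worker weights.
    Illustrative only; not the paper's exact algorithm.
    """
    n_workers, n_items = answers.shape
    weights = np.ones(n_workers) / n_workers
    for _ in range(n_iter):
        # Truth update: reliability-weighted average per item.
        truths = weights @ answers / weights.sum()
        # Weight update: workers far from the current truths lose weight.
        losses = ((answers - truths) ** 2).sum(axis=1) + eps
        weights = np.log(losses.sum() / losses)
    return truths, weights
```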

In their experiments two scenarios are considered. In the first scenario the adversary has full knowledge, i.e., the adversary knows the aggregation method that is used and all values that normal workers assign to the items and transmit to the server. The authors argue that even though this might look like a strong assumption, this scenario is not uncommon in practice, as all data could be public. For instance, if the task is to collect local weather data, the adversary could get all weather information from weather services. In the second scenario the adversary has only partial knowledge: the adversary still knows the aggregation method but knows the values of only a subset of the normal workers.

The authors successfully show that data poisoning attacks can also be a problem for crowdsourcing-based labeling when continuous features are used. To demonstrate the effectiveness of their attack, they compare it with a random attack, in which malicious workers assign random values to the items, and a maximum attack, in which simply the maximum allowed value is assigned to each item. For instance, it is shown that an adversary controlling 10% of the workers can increase the estimation error to almost 94%.
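For reference, the two baseline attacks are easy to express in code. The sketch below assumes a value range of -100 to 100, as in the Emotion dataset; the helper names and defaults are hypothetical.

```python
import numpy as np

def random_attack(n_items, low=-100, high=100, rng=None):
    """Baseline: the malicious worker reports uniformly random values."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.uniform(low, high, size=n_items)

def maximum_attack(n_items, high=100):
    """Baseline: the malicious worker always reports the maximum allowed value."""
    return np.full(n_items, float(high))
```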

Defense

Finally, two defenses are proposed that significantly reduce the effect of data poisoning attacks. In the “median-of-weighted-average” defense, the server partitions the workers who labeled an item into different groups. Then, the weighted average of each group is computed, and the median of all group estimates is selected as the aggregated value of the item.
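A minimal sketch of this defense for a single item could look as follows. The random partitioning into groups is an assumption on my part; the paper may partition workers differently.

```python
import numpy as np

def median_of_weighted_average(values, weights, n_groups=5, rng=None):
    """Sketch of the median-of-weighted-average defense for one item.

    values  -- answers of all workers who labeled the item
    weights -- the corresponding worker weights
    Assumes at least as many workers as groups; the random partitioning
    is an illustrative choice, not necessarily the paper's.
    """
    if rng is None:
        rng = np.random.default_rng()
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    idx = rng.permutation(len(values))            # shuffle the workers
    group_estimates = []
    for group in np.array_split(idx, n_groups):   # partition into n_groups groups
        w = weights[group]
        group_estimates.append(np.dot(w, values[group]) / w.sum())
    return float(np.median(group_estimates))      # median limits the effect of bad groups
```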

In the “maximum influence of estimation” defense, the assumption is that the server knows the goal of the adversary, how many malicious workers exist, and when the crowdsourcing system is under attack. With this knowledge, the defense identifies workers that are potentially malicious and removes their results.
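The paper’s exact algorithm is more involved, but one way to read the name is that a worker’s influence is measured by how much the estimate changes when that worker is removed, and the most influential workers are dropped. The sketch below follows that interpretation and is only an assumption, not the paper’s method.

```python
import numpy as np

def remove_high_influence_workers(values, weights, n_remove):
    """Rough, assumed sketch of an influence-based filter for one item.

    A worker's influence is the change in the weighted estimate when the
    worker is left out; the n_remove most influential workers are removed.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    full_estimate = np.dot(weights, values) / weights.sum()
    influences = []
    for k in range(len(values)):
        mask = np.ones(len(values), dtype=bool)
        mask[k] = False                           # leave worker k out
        est = np.dot(weights[mask], values[mask]) / weights[mask].sum()
        influences.append(abs(full_estimate - est))
    keep = np.argsort(influences)[: len(values) - n_remove]
    return values[keep], weights[keep]
```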

The authors show that the proposed defenses are effective under both the full-knowledge and the partial-knowledge setting. On the other hand, they also show that the defenses remain vulnerable if the number of malicious workers grows significantly. For instance, even with the median-of-weighted-average defense, the average estimation error is almost 15% if 30% of the workers are malicious.

References

Fang, M., Sun, M., Li, Q., Gong, N. Z., Tian, J., & Liu, J. (2021). Data Poisoning Attacks and Defenses to Crowdsourcing Systems. arXiv preprint arXiv:2102.09171.
