Crowdsourcing (CS) has emerged as a promising approach for obtaining services, feedback, or data from a large number of people connected through the Internet, in a short time and at a reasonable cost. CS has been used in a wide range of contexts, proving its versatility. However, because they are generated by anonymous actors, crowd-sourced services and data are of unguaranteed quality. Indeed, their correctness depends on two factors: (i) the reliability and trustworthiness of the contributors and (ii) the influence of the contributors' subjectivity on their contributions. The quality of these services and data must therefore be verified, which usually incurs additional time and cost. In this context, the goal of this PhD thesis is to provide the theoretical and practical elements needed to address the following question: how can the quality of crowd-sourced data be controlled, taking into account the specificity of each worker and each task, while minimizing budget and time overheads and remaining agnostic to complementary knowledge such as gold answers and trust information about workers (since in real life this knowledge is not available)?
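To make the problem concrete, the following is a minimal sketch of one classical way to estimate answer quality without gold answers or prior trust information: an EM-style scheme (in the spirit of Dawid and Skene) that alternates between inferring task labels by reliability-weighted voting and re-estimating each worker's reliability from their agreement with those labels. This is an illustrative baseline, not the method developed in this thesis; the function name, data layout, and initial reliability value are all assumptions.

```python
# Illustrative sketch (hypothetical names): jointly estimate binary task
# labels and worker reliabilities from crowd answers alone, with no gold
# answers and no prior trust information about workers.

def aggregate(answers, n_iters=20):
    """answers: dict mapping (worker, task) -> binary answer (0 or 1)."""
    workers = sorted({w for w, _ in answers})
    tasks = sorted({t for _, t in answers})
    reliability = {w: 0.8 for w in workers}  # arbitrary initial guess

    for _ in range(n_iters):
        # E-step: estimate each task's label by reliability-weighted voting
        estimate = {}
        for t in tasks:
            score = sum((1 if answers[(w, t)] == 1 else -1) * reliability[w]
                        for w in workers if (w, t) in answers)
            estimate[t] = 1 if score > 0 else 0
        # M-step: re-estimate each worker's reliability as the fraction of
        # their answers that agree with the current label estimates
        for w in workers:
            done = [t for t in tasks if (w, t) in answers]
            agree = sum(answers[(w, t)] == estimate[t] for t in done)
            reliability[w] = agree / len(done)
    return estimate, reliability
```

Even this toy version exposes the trade-offs the thesis question raises: it treats every task as equally difficult and every worker as uniformly reliable across tasks, whereas the stated goal is to account for the specificity of each worker and each task. A more principled variant would weight votes by log-odds of reliability (so that a consistently wrong worker becomes an informative, negatively weighted signal) and model per-task difficulty.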