
Opening up TEL Research: A critical perspective from two case studies

Juliana Elisa Raffaghelli; Matteo Bozzi; Maurizio Zani
2018-01-01

Abstract

Data extracted from educational research pose ethical problems as well as data literacy challenges, both for researchers (making data readable and usable) and for end users (reading and using data). Within a Responsible Research and Innovation approach, data handling and visualization are crucial for making science not only open, but also accessible to other researchers and to the end users themselves, who are normally unaware of being engaged in such processes. A subtle aspect of this situation relates to the poor appropriation and use of scientific research products in the pedagogical field by potential users (stakeholders in the education and training system). The approach to a fairer science promoted by the European Commission has been developing internationally for about ten years, but in the field of TEL, despite the ease of massive data collection, the difficulties of opening up research have led to intrinsic, ongoing debates. The specific case of Open Data (a relevant and innovative part of the Open Science scheme) should therefore be discussed as a resource for TEL research. Data produced in this field are not limited to the popular educational data-mining procedures frequently adopted in MOOCs, but encompass a variety of research methodologies that yield diversified types of data, which in turn implies diversified ways of treating, sharing, and using them. Learning some of these principles and tools, and discussing them in the context of our current practices as researchers in the field of TEL, could be a starting point toward unravelling problems and coordinating efforts for further action. Therefore, the authors of this workshop presentation discuss two case studies of TEL research in Higher Education, dealing with different research problems, methods and, hence, data collected. In this regard, the authors engaged throughout the research in a reflective practice connected to opening up the data collected. The driving questions for this reflective practice were: which data can we actually share? To what extent could the shared data put students at risk, and how could they be informed about their data? How could we raise other researchers' awareness of the usability of our data?

The first case regarded a study on the effectiveness of MOOCs as part of a pedagogical strategy aimed at supporting undergraduate students in large-size lectures. The case related to an experimental blended learning activity within a Physics course for Science and Engineering pathways. The activity combined a parallel MOOC delivered through POK (PoliMi Open Knowledge, http://www.pok.polimi.it), the Politecnico di Milano's MOOC portal, with face-to-face activities that included intensive technology enhanced learning, such as clicker-based feedback. The various elements of the approach (the tutors' pedagogy, the adoption of clickers, the diversity among learning groups) yielded quantitative data that were analyzed through traditional descriptive and inferential statistics. The findings highlighted the importance of MOOCs at the preliminary undergraduate level across small and large lectures, relative to the other factors adopted within the learning design. Moreover, with this integrated design including MOOCs, the students in a large lecture demonstrated similar or even better performance than students in a small group.
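As an illustration of the kind of descriptive and inferential comparison involved in the first case (the authors' analysis was carried out in R; the file name, column names and test choice below are hypothetical, not the authors' actual code), a minimal Python sketch might look as follows:

import pandas as pd
from scipy import stats

# Hypothetical dataset: one row per student, with lecture size ("large"/"small")
# and a post-test score.
df = pd.read_csv("mooc_blended_scores.csv")

# Descriptive statistics per lecture-size group
print(df.groupby("group")["post_test_score"].describe())

# Inferential comparison between large and small groups (Welch's t-test)
large = df.loc[df["group"] == "large", "post_test_score"]
small = df.loc[df["group"] == "small", "post_test_score"]
t_stat, p_value = stats.ttest_ind(large, small, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")

Sharing a simple script of this kind alongside the dataset is what makes replication by other researchers feasible, provided the accompanying contextual information is also documented.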
In the end, the authors recognized the limitations of this study, which was performed under non-randomized conditions and was therefore subject to bias. Sharing the data would open the possibility of replications that would help compare the findings. However, while the data collected and opened, together with the R code generated for the analysis, had a simple structure, the experimental conditions were rather unique and tied to a particular institutional context and a particular team of teachers. Therefore, the authors reflected on the additional information that necessarily has to accompany any process of data sharing in educational research.

The second case regarded another undergraduate course, "eLearning Design", run within the degree in Human-Computer Interaction Interfaces at the University of Trento. In this case, a team of three teachers developed a blended activity with a significant component of collaborative and project-based learning. Beyond designing and implementing their own eLearning courses, the students had to peer-evaluate others' activities. Moreover, they had to give a live presentation of their work and individually collect their reflections on the learning experience. The final products, namely the essays on the learning experience, were rich multimodal texts dealing with students' emotions, critical situations and learning episodes, as well as the skills they acknowledged having achieved along the collaborative and project-based activity. This information was analyzed in order to explore the dimensions that characterized the quality of the learning experience, in contrast with the institutional final surveys on students' satisfaction. Indeed, the authors noticed that while the surveys registered low scores, the rich qualitative information yielded a completely different picture: while the students acknowledged the complexity and time-consuming nature of collaborative learning, they revealed in several ways the importance of its impact on their motivation and on the skills and knowledge they achieved. In this case, the data collected were qualitative: a dataset of 1339 text units was generated. The dataset contained the elements that would lead other researchers to understand the categories applied to the corpus for its analysis, as well as the sequential coding procedure leading to more interpretative levels. The authors reflected on forms of data anonymization and discussed the contingencies for sharing such data, since the dataset could be the basis for several research studies.

Putting the two experiences together, one quantitative and the other qualitative, it was possible to see how traditional research methods require combination with emergent research reflections in order to open up data. In fact, while in the quantitative case it was important to add contextual information that could modulate the variables' performance, in the qualitative case processes of synthesis and quantitative representation would have provided forms of protection against privacy issues.
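Regarding the anonymization reflection in the second case, one possible preparatory step before sharing a coded qualitative dataset is to replace direct identifiers with stable pseudonyms. The following is a minimal Python sketch under assumed file and column names (not part of the original study); it does not address indirect identifiers inside the free text, which still require manual review:

import hashlib
import pandas as pd

# Hypothetical dataset: one row per coded text unit, with the student's name,
# the text unit itself, and the category assigned during coding.
df = pd.read_csv("elearning_design_coded_units.csv")

def pseudonymize(name: str, salt: str = "replace-with-project-secret") -> str:
    """Derive a stable, non-reversible pseudonym from a participant name."""
    return "P" + hashlib.sha256((salt + name).encode("utf-8")).hexdigest()[:8]

# Replace the direct identifier with a pseudonym, then drop it before sharing.
df["participant"] = df["student_name"].map(pseudonymize)
df = df.drop(columns=["student_name"])
df.to_csv("elearning_design_coded_units_shareable.csv", index=False)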


Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1060881