- Franziska Böhler of FeedbackFruits says compliance must be built into EdTech tools from the start to ensure trust and protect learners.
- The EU AI Act creates both strict rules and new opportunities, pushing developers, regulators, and educators to collaborate for ethical innovation.
Building trust in AI-driven education technologies
Franziska Böhler, Head of Legal at FeedbackFruits, has emerged as one of the leading voices bridging AI regulation and education technology. Speaking at this year’s conference, she presented “From Regulation to Innovation: What the EU AI Act Means for EdTech”, giving the research and education community a clear roadmap for the legal and ethical shifts ahead. Based in Amsterdam, Böhler has spent five years at FeedbackFruits, where she focuses on building trust between the company and public institutions.
Balancing compliance demands with space for innovation
The EU AI Act sets strict rules for high-risk AI uses, a category that includes education. Böhler said EdTech developers should not treat compliance as a box-ticking exercise; it must be built into the product from the start. At FeedbackFruits, the team designs its operations with privacy and trust at the centre, aiming to create systems that do not need to be fixed later. Böhler noted that this makes compliance part of the development process itself, which reduces risk and builds stronger confidence in the technology. BTW Media sources have revealed that this approach is becoming more common among responsible EdTech companies.
Embedding ethical principles without harming learning outcomes
Böhler stressed that ethics and learning outcomes cannot be separated: genuine learning results come only from tools that people can trust. According to our research, developers who do not embed ethics risk undermining the value of their products. At FeedbackFruits, the focus is on building systems that protect learners from the start, which Böhler said includes designing for privacy, fairness, and transparency. She added that if technology is not trustworthy, the outcomes it produces cannot reflect public values. Industry pundits say this message is vital as AI tools are tested in more classrooms.
Collaboration across developers, regulators, and institutions
Böhler said that responsibility for safe AI in education must be shared, with regulators, developers, educators, and institutions each playing distinct roles. She warned against passing accountability from one group to another, a habit that, according to sources with knowledge of the issue, slows progress and weakens trust. Every stakeholder, Böhler explained, must accept their share of responsibility; collaboration builds stronger systems and avoids gaps in oversight. Industry pundits say this approach can also help create better standards for the whole sector.
Setting higher expectations for the future of EdTech
Böhler explained that the EU AI Act is not only about following the law; it also opens an opportunity for the industry to innovate responsibly. According to our research, this means putting ethics at the centre and keeping public values in mind. Böhler told BTW Media that private companies and public institutions must work together closely, a partnership she said can make AI tools both effective and trusted. BTW Media sources have revealed that several EdTech firms are already preparing for these higher expectations.