Between the 13th and 17th of May, ESR Fatma Doğan had the opportunity to attend the Summer School, an event that brought together researchers from a variety of fields to share insights, foster collaboration, and advance collective knowledge. It was an invigorating experience, to say the least.
One of the most rewarding aspects of the summer school was the opportunity for Fatma to present a part of her PhD research. Sharing her work with such a diverse and knowledgeable audience was exhilarating, and the feedback she received was invaluable, offering new perspectives and ideas that she is eager to incorporate into her research.

The summer school also provided a fantastic platform for Fatma to network with other researchers. Engaging with peers from different disciplines opened her eyes to methodologies and approaches she had not considered before. These interactions highlighted the importance of interdisciplinary collaboration in driving innovation and solving complex problems.
The event featured many esteemed scholars, including Nadya Purtova, Lilian Edwards, Michael Veale, Uli Sachs and Yong Lim, who gave insightful talks on a wide range of topics. As an example, Fatma shared some remarks from Woodrow Hartzog's speech.
Prof. Hartzog delved into the complex risks of AI and the challenges of regulating this technology. He highlighted the difficulty of understanding AI's full risks, noting the divide between techno-optimists and techno-doomers. Prof. Hartzog stressed the dangers of an unregulated information ecosystem and the necessity of proactive measures to prevent harmful advancements by less scrupulous actors. He pointed out the growing privacy risks posed by IoT devices such as facial recognition doorbells, as well as the increasing use of AI for micro-management in schools, workplaces, and universities. Together, he argued, these trends create an "AI micro-managing machine" that personalizes ads and perpetuates misinformation, a scenario he described as a "snake eating its own tail."
Regarding regulation, Prof. Hartzog critiqued Biden's executive order on AI, which focuses on transparency but often falls short: transparency is insufficient without real power for individuals, and debiasing AI is challenging and does not necessarily make systems less dangerous. Ethical guidelines and advisory boards, while well-intentioned, often lack authority, creating a false sense of progress. Emphasizing individual control over personal data is likewise misleading when system designs do not support genuine autonomy. Prof. Hartzog argued that governments must protect individuals regardless of their choices because "no technology is neutral", and that lawmakers should be involved in tech development to ensure alignment with societal values. He highlighted the importance of maintaining social trust and pointed out that meaningful AI regulation might reduce industry profits but is essential to avoid societal harm; technologies that require significant human exploitation might not be necessary at all.
Reflecting on historical lessons, Prof. Hartzog noted the decline in public trust in tech companies and the growing calls for stricter regulations, such as bans on facial recognition. Local efforts, like city councils banning facial recognition, demonstrate meaningful action once considered unimaginable. Finally, Prof. Hartzog emphasized that addressing bias in AI is crucial but just a starting point. Bias correction alone won’t eliminate the risk of AI being used oppressively. He compared AI regulation to speed limits—necessary for safety despite being imperfect. He also noted that surveillance-driven advertising contributes to misinformation, underscoring the need for comprehensive and proactive AI governance, ethical development, and privacy protection to ensure technology serves society responsibly.
This summer school was more than just an academic exercise, and Fatma looks forward to applying the insights gained to her ongoing research. Lastly, she thanks the organizers and participants who made the Summer School a success.