Toronto Public Tech Workshop 2023

Schedule

Friday, June 9, 2023, 9:00 AM to 5:00 PM

Location

Campbell Conference Facility, 1 Devonshire Place | Toronto, ON

Exploring new research on the use of technology for public purposes.
About this Event

The Schwartz Reisman Institute for Technology and Society and the Munk School of Global Affairs & Public Policy at the University of Toronto are pleased to host the Toronto Public Tech Workshop, with researchers from a wide range of disciplines presenting new work that explores the use of technology for public purposes.

As technology becomes an integral part of our lives, its impact on society is undeniable. From healthcare to education, finance to transportation, technological innovations have transformed the way we live and work. However, the rapid pace of this innovation also raises novel concerns about privacy, security, and equity. There is a pressing need to explore and propose solutions to these challenges through research, policy, regulation, partnerships, and collaborations across various academic disciplines and stakeholders.

This workshop aims to address these challenges and offer new insights and solutions by bringing together diverse perspectives and expertise from a wide range of backgrounds. Presenters will share and discuss ideas on how to leverage new and existing technologies for public purposes, integrate policy and governance considerations, and build successful partnerships that engage with democratic institutions and public values.


Speakers:

Peter Loewen, Munk School of Global Affairs & Public Policy, University of Toronto; associate director, Schwartz Reisman Institute for Technology and Society

Somayeh Amini and Shveta Bhasker, Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto

Onur Bakiner, Political Science Department, Seattle University

LK Bertram, Department of History, University of Toronto

Shion Guha, Faculty of Information, University of Toronto

Kelly McConvey, Faculty of Information, University of Toronto

Lynette Ong, Munk School of Global Affairs & Public Policy, University of Toronto

Yan Shvartzshnaider, Lassonde School of Engineering, York University


Schedule:


8:30 AM | Registration and continental breakfast


9:00 AM | Opening remarks, Peter Loewen (Munk School of Global Affairs & Public Policy, University of Toronto)


9:10 AM | Shion Guha (Faculty of Information, University of Toronto), “Rethinking ‘risk’ in algorithmic systems through a computational narrative analysis of casenotes in child-welfare”

Risk assessment algorithms are being adopted by public sector agencies to make high-stakes decisions about human lives. These algorithms model “risk” based on individual client characteristics to identify the clients most in need. However, this understanding of risk rests primarily on easily quantifiable risk factors, which offer an incomplete and biased perspective of clients. We conducted a computational narrative analysis of child-welfare casenotes and draw attention to deeper systemic risk factors that are hard to quantify but directly impact families and street-level decision-making. We found that, beyond individual risk factors, the system itself poses a significant amount of risk: parents are over-surveilled by caseworkers and lack agency in decision-making. We also problematize the notion of risk as a static construct by highlighting the temporality and mediating effects of different risk, protective, systemic, and procedural factors. Finally, we caution against using casenotes in NLP-based systems by unpacking the limitations and biases embedded within them.
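For readers curious what a computational analysis of casenotes can involve, the sketch below shows one common building block of such work, topic modeling over free-text notes, using scikit-learn on a few invented snippets. It is purely illustrative: the casenote text is fabricated, and the authors' actual pipeline is not described here.

```python
# Illustrative sketch only: surfacing recurring themes in (synthetic)
# casenotes with topic modeling. This is NOT the authors' method; it shows
# the general kind of text analysis the abstract refers to.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical, invented casenote snippets (real casenotes are confidential).
casenotes = [
    "caseworker conducted unannounced home visit, home found clean",
    "mother missed scheduled visit due to lack of transportation",
    "court ordered additional supervision and parenting classes",
    "father reports stress from housing instability and job loss",
]

# Bag-of-words representation of the notes.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(casenotes)

# Fit a small LDA model to surface latent themes.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic as a rough view of recurring narratives.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```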


10:10 AM | Lynette Ong (Munk School of Global Affairs & Public Policy, University of Toronto), “Authoritarian statecraft in the digital age: Online public opinion management in China”

The digital age has afforded autocrats new technologies of control, allowing them to co-opt, pre-empt, and repress dissent. But in what ways has it altered how autocratic states conduct their statecraft and reconfigured the contours of state power? In this paper, we address these questions by examining how the Chinese state manages the online expression of public opinion. Public opinion is a double-edged sword in autocratic settings. While it allows rulers to gauge public sentiment and become more responsive to citizens’ demands, it can also spiral out of control and destabilize regimes. The management of online public opinion thus provides a critical window into how the state conducts its statecraft in the digital age. Based on an analysis of more than 3,000 public procurement documents, we find that the Chinese state has outsourced various functions of public opinion management to private and state-owned corporations. These companies provide the technical expertise that allows the state to harness big data and artificial intelligence to manage the expression of public opinion online. In-depth analysis of the for-profit firms to which these services have been outsourced, and of their service functions, further reveals the nature of state-business relations and social control in China. The paper draws out broader implications for the performance of statecraft in the digital age, one based on state-business collaboration in autocratic China.


11:10 AM | Onur Bakiner (Political Science Department, Seattle University), “Pluralistic sociotechnical imaginaries in artificial intelligence law: The case of the European Union’s AI regulation”

This paper asks how lawmakers and other stakeholders envision the potential benefits and challenges arising from artificial intelligence (AI). A close reading of the European Union’s AI Regulation, a bill proposed by the European Commission in April 2021, and of 302 response papers submitted by NGOs, businesses and business associations, trade unions, academics, public authorities, and EU citizens shows that pluralistic sociotechnical imaginaries contest: (1) the essential characteristics of technology as they relate to social and political problems and to law; (2) whether, how, and how much law can enable, direct, or constrain scientific and technological developments; and (3) the degree to which law does or should intervene in scientific and technological controversies. The feedback from stakeholders reveals major disagreements with the lawmakers over how the relevant characteristics of AI should influence legal regulation, what the desired law should look like, and whether and how the law should intervene in expert debates on AI. What is more, different types of stakeholders diverge considerably in what they problematize and how they do so.


12:00 PM | Lunch


1:00 PM | LK Bertram (Department of History, University of Toronto), “Instascholars: Making good data go viral in the disinformation age”

How do we make accurate data go viral? Outside of my work as an associate professor at the University of Toronto, I am also an anonymous “instaprof” who runs a large-scale open history class on Instagram. My work is driven by this question and is the focus of a new SSHRC-funded project on high-yield knowledge mobilization strategies for video-based social media algorithms. My paper offers an overview of the digital and algorithmic literacy scholars need to produce high-engagement or “viral” content; of endemic issues with bias, censorship, and safety on video-based social media platforms; and of the opportunities for university communities to create new, steady streams of accessible, accurate content for these big digital publics.

Amid the early rise of the COVID-19 pandemic, the World Health Organization argued that it was also facing a twin “infodemic,” or the widespread public distribution of “false or misleading information in digital environments.” While social media platforms have shouldered much of the blame for the infodemic, the WHO cautions that the success of both misinformation and disinformation campaigns has only been made possible by a corresponding vacuum of quality data online. Indeed, most scientists and scholars largely avoid social media platforms. Though some have developed a presence on text-based platforms like Twitter, very few circulate research on the biggest video-based platforms like TikTok and Instagram, in spite of their intense popularity. This absence is problematic. A 2021 study revealed that 86% of North Americans turn to video-based content on social media as a news source. The collective academic avoidance of these massive audiences, and those audiences’ unchecked, largely unchallenged growth, have made some of the biggest digital publics in the world easy prey for misinformation and disinformation campaigns with troubling agendas, from anti-transgender legislation to curriculum bans on topics like slavery.

Some of the scholarly avoidance of video-based platforms reflects the disproportionate risks of harassment and violence faced by female, queer, and BIPOC scholars who speak out on social media. In addition to facing an unwelcoming, potentially hostile space, many scholars in the humanities and social sciences cannot afford the time to build larger-scale public outreach campaigns. Those who do often do so as side projects, as I have, frequently receiving little or no external support or recognition in academic circles. As Simone Lässig explains, monographs remain the “gold standard” for many humanities and social science scholars, while digital experimentation, content, and mobilization continue to play a far more “subordinate role” in how historians prioritize outputs. Missing, Noiret argues, are serious new conversations about how massive technological shifts require historians to reconsider our responsibilities and relationships to the digital public.

Rather than simply providing an overview of the problem, this paper also offers attendees a discussion of future possibilities and directions that can support stronger public access to academic research through video-based social media platforms. It describes the benefits of stronger algorithmic literacy campaigns for academics and ways to prioritize and defend equity, safety, and sustainability in an unequal digital landscape. It closes with a step-by-step introduction to the five qualities of high-engagement (viral) content for attendees interested in building their own knowledge mobilization campaigns for TikTok and Instagram.


2:00 PM | Kelly McConvey (Faculty of Information, University of Toronto), “A human-centered review of algorithms in decision-making in higher education”

The use of algorithms for decision-making in higher education is steadily growing, promising cost savings for institutions and personalized service for students, but also raising ethical challenges around surveillance, fairness, and the interpretation of data. To address the lack of systematic understanding of how these algorithms are currently designed, we reviewed an extensive corpus of papers proposing algorithms for decision-making in higher education. We categorized them based on input data, computational method, and target outcome, and then investigated the interrelations of these factors through the application of human-centered lenses: theoretical, participatory, or speculative design. We found that the models are trending towards deep learning and towards increased use of students’ personal data and protected attributes, with the target scope expanding towards automated decisions. However, despite the associated decrease in interpretability and explainability, current development predominantly fails to incorporate human-centered lenses. We discuss the challenges posed by these trends and advocate for a human-centered approach.


3:00 PM | Yan Shvartzshnaider (Lassonde School of Engineering, York University), “Privacy governance not included: Analysis of third parties in learning management systems”

The tumultuous COVID-19 pandemic significantly impacted higher education. The rapid adoption of online remote learning platforms resulted in increased surveillance practices and a lack of transparency. While this transition enabled schools to remain open during a global pandemic, it exposed them to greater privacy challenges and threats. Although recent efforts have identified numerous specific educational privacy concerns involving major learning management systems, the challenges and uncertainties surrounding the use of third-party add-ons for learning management system (LMS) platforms remain relatively under-examined.

LMS add-ons—also known as plug-ins or Learning Tools Interoperability (LTI) tools—provide additional capabilities to existing LMS platforms. Many existing LMS platforms allow third-party add-ons to access the platform’s data in order to provide additional services. For example, the Turnitin plagiarism detection service has add-ons for all major LMS platforms, including Canvas, Moodle, and Blackboard. Importantly, the LMS platforms’ privacy policies often do not cover third-party add-ons and claim no responsibility for the privacy practices of these third parties.

A recent case before the Office of the Information and Privacy Commissioner (IPC) of Ontario, Canada showed that third-party add-ons inadvertently collected and shared student information (MC18-17 2022). Consequently, the IPC determined that the “[school board] does not have reasonable contractual and oversight measures in place to ensure the privacy and security of the personal information of its students.”

The IPC decision is indicative of the current status quo. When it comes to LMS add-ons, many universities follow informal practices that are not written down as an explicit policy. Usually, the burden falls on educational IT support staff to meet the needs of diverse stakeholder groups, such as educational technology practitioners, educators, and students. The lack of transparency behind many of these services adds to the challenge of understanding privacy and intellectual property implications with respect to third-party plug-ins in LMSs.

Motivated by these concerns and questions, we examine the use and governance of LMS add-ons at universities in the U.S. and Canada. Specifically, this paper explores third-party access to student data via add-ons, as well as the governance of third-party data sharing, via a multi-method design that draws on surveys, interviews, institutional policy analysis, and content analysis of LMS documentation. We document disparities in privacy practices and governance, and the nature of add-ons adoption processes. We also argue for greater transparency and oversight, drawing on exemplary practices identified via our empirical study.

In our study, we conduct interviews with data governance officers at 14 additional US universities, providing deeper insight into the governance challenges associated with assessment and instructional LMS add-ons. A total of 25 professionals across these 14 universities discuss decision-making processes and frequent challenges, including coordination in the adoption and evaluation of add-ons and value differences dividing preferences across administrative units on their campuses: typically IT, the Center for Innovation in Teaching & Learning (CITL), Provost’s office staff, faculty governance, and legal counsel. These results provide insight into who within higher education is responsible for decision-making about LMS data and third-party data flows via add-ons, including where decision-making processes break down and which stakeholders’ interests may best align with student privacy preferences.


4:00 PM | Somayeh Amini and Shveta Bhasker (Institute of Health Policy, Management and Evaluation, University of Toronto), “Unlocking the power of EHRs: Harnessing unstructured data for machine learning-based outcome predictions”

Integrating electronic health records (EHRs) with machine learning (ML) models has become imperative in examining patient outcomes, given the vast amounts of clinical data EHRs contain. However, critical information regarding social and behavioral factors that affect health, such as mental health complexities, is often recorded in unstructured clinical notes, hindering its accessibility. This has resulted in an over-reliance on structured clinical data in current EHR-based research, leading to disparities in health outcomes. This study aims to evaluate the impact of incorporating patient-specific context from unstructured EHR data on the accuracy and stability of ML algorithms for predicting mortality. We analyzed a sample of 1,058 patient records from the Medical Information Mart for Intensive Care III (MIMIC-III) database to identify mental health disorders among adults admitted to intensive care units between 2001 and 2012. All clinical notes from each patient’s most recent ICU stay were evaluated to acquire a comprehensive understanding of their mental health issues based on unstructured data. We examined a variety of machine learning classifiers, including logistic regression, kernel-based support vector machines, decision-tree-based random forests, XGBoost, ExtraTrees, and instance-based k-nearest neighbors. Results from the study confirm the significance of incorporating patient-specific information into prediction models, leading to a notable improvement in the discriminatory power and robustness of the ML algorithms. In addition, the findings underline the importance of considering non-clinical factors related to a patient’s daily life, alongside clinical characteristics, when predicting patient outcomes. These results can significantly improve the use of ML in clinical decision support and patient outcome prediction.
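As a rough illustration of the kind of classifier comparison the abstract describes, here is a minimal scikit-learn sketch on synthetic data. The features and labels are placeholders (real MIMIC-III data requires credentialed access), XGBoost is omitted to keep the sketch to a single library, and nothing here reproduces the authors' actual experiment.

```python
# Illustrative sketch only: comparing the classifier families named in the
# abstract on an invented dataset of the same size as the study sample.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for structured vitals/labs plus features derived from clinical
# notes (e.g., indicators of documented mental health complexity).
X, y = make_classification(n_samples=1058, n_features=20, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm_rbf": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# Compare discriminatory power (AUROC) via 5-fold cross-validation.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC = {scores.mean():.3f} +/- {scores.std():.3f}")
```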


5:00 PM | Cocktail reception


For questions or accessibility accommodations, please contact [email protected].


___________


About the Schwartz Reisman Institute for Technology and Society

The Schwartz Reisman Institute for Technology and Society is a research institute at the University of Toronto that explores the ethical and societal implications of technology. Our mission is to deepen knowledge of technologies, societies, and humanity by integrating research across traditional boundaries to build human-centred solutions.

Our research community seeks to rethink technology’s role in society, the needs of human communities, and the systems that govern them. We are investigating how best to align technology with human values and deploy it accordingly.

Across all our activities, SRI convenes world-class expertise and diverse perspectives from universities, government, industry, and beyond to develop new modes of thinking about powerful technologies and their role in what it means to be human in the 21st century. We are defining what’s possible, determining what’s at stake, and devising implementable solutions to make sure technologies like AI are effective, safe, fair, and beneficial—for everyone.

https://srinstitute.utoronto.ca/


Tickets

CAD 20.00

Hosted by: Schwartz Reisman Institute
