Abstract

Conceptual models are a tool for information system designers to represent abstract ideas in a formal system, as well as a cognitive tool for reasoning about the world. In this sense, conceptual models both represent the world around us and help us interact meaningfully with that world. Conceptual models are also ubiquitous in our everyday lives - from the federal system of postal codes that ensures our mail is delivered accurately to the list of “stop words” that governs a natural language processing application in our word processors. In this workshop we seek to convene researchers who are engaged in critically examining and challenging the implementation of conceptual models used in socio-technical systems. The workshop will be organized around presentations of works in progress and a set of “challenges” that ask participants to collaboratively develop working papers that can inform future research.

Workshop Proposal

For the last three years, we have held a series of workshops on conceptual models in which we have argued that these tools are fundamental to the construction, maintenance, and use of digital infrastructures that mutually constitute people and technology (the sociotechnical) (Thomer, 2019; Weber et al., 2019, 2020a, 2020b). Throughout these workshops we have engaged over 200 attendees in discussions, presentations, and keynote lectures on how formal and informal methods of abstract representation govern the ways systems work, as well as the information objects they process and transmit. We continue to see that social, political, and economic conditions require taking these conceptual models seriously, both in the way that these models structure the retrieval and use of information objects and in the logic by which complex ideas are made legible to formal systems that are often rigid and unforgiving in their need for specificity.

Our proposed fifth workshop in this series will focus on issues of fairness and accountability as they are manifested in conceptual models. Central to the argument of our workshop series is that sociotechnical systems - which represent the mutual constitution of people and technologies acting in particular contextual environments - are fundamentally shaped by the conceptual models that they enact. This notion of the sociotechnical applies to a wide range of applications, from the conceptual model behind a facial recognition system that classifies gender (Leavy, 2018), to the seemingly mundane, such as how a topic is identified within a corpus of text (Caliskan & Lewis, 2020). But the consequences of these systems are not mundane - they impact people’s lives and people’s experiences with and through information objects (e.g., Brown et al., 2020; Perkowitz, 2021). Much of the research that has appealed to concepts like transparency, trust, and fairness in emerging technologies has focused on the downstream effects of technology use and the harms it perpetuates (e.g., LePeau et al., 2016). However, many of these studies fail to tackle the upstream question of how a concept is formally conceived and modeled, and how the design of a conceptual model becomes encoded in potentially harmful technologies.

Through this workshop we hope to convene researchers in information science interested in critical, careful examination of how we model information in sociotechnical systems. Areas of interest include, but are not limited to, how demographics and other attributes of people are conceptualized, how user behavior is interpreted, and the effects of using a concept developed in one setting for applications in another (e.g., the use of a credit score in evaluating a candidate for a job; Roselli et al., 2019). Through this workshop’s focus on critical approaches to conceptual models we hope to foster renewed attention to the ways these formal and informal tools are used in sociotechnical systems development, and with what social implications. We firmly believe that information science has an important role to play, first in surfacing and describing conceptual models in practice, and then in improving the evaluation of these models as they are used throughout society.

Format

The proposed workshop will follow a full-day mini-conference format in which participants will share working papers on the topic of conceptual models in sociotechnical systems and engage in a series of generative “challenges” exploring how information science research techniques might be used to study emergent topics of relevance to conceptual modeling. The workshop will also include a keynote talk from an expert on accountability in sociotechnical systems.

Proposed Agenda

The full-day agenda will include an introductory keynote and presentations of working papers on the topic of conceptual models. Working paper submissions will be solicited prior to the workshop and reviewed by at least two members of the organizing committee. We aim to invite 6-8 papers for presentation. Papers will be non-archival, and discussion of each paper will be aimed at helping authors clarify their arguments and prepare their work for future publication. In the afternoon, the workshop’s attention will shift from existing work to the prospective work that is necessary to make progress on research in the conceptual modeling domain. During the workshop at ASIS&T, attendees will be offered the chance to join a group of peers focusing on a topic of relevance to sociotechnical systems design and implementation. The working groups will spend time developing ideas, identifying datasets, and structuring arguments that can be generative for future scholarship at ASIS&T and beyond.

Call for contributions - Working Papers:

Working papers might address (but are not limited to) the following topics:

  • Conceptual Models in Practice - What techniques are used to develop and audit conceptual models in specific settings for particular tasks?
  • Transparency - How do practitioners and researchers identify and interpret the conceptual models that underpin classification, categorization, or information retrieval systems?
  • Fairness and Accountability - What are the best ways to evaluate the implementation of a conceptual model? How can and should we understand the ways that these models enact accountability or fairness for users and subjects?

Call for contributions - Conceptual Modeling Challenges:

In addition to working papers, we will also solicit participation in the form of a set of conceptual modeling challenges that address (but are not limited to):

  • Representing Aspects of the Human and Environmental Interface in Conceptual Models;
  • Privacy and Awareness of How Conceptual Models Encode Demographic Data;
  • Categorization and Conceptualization in Language Models and Text Corpora Analysis;
  • Collective, Distributive, and Individual Harms that Result from Conceptual Models Used in Sociotechnical Systems; and
  • Information Representation and Retrieval of Complex Digital Artifacts (e.g., Collection- and Item-Level Descriptions).

Organizers

  • Katrina Fenlon is an assistant professor in the College of Information Studies at the University of Maryland. Her research focuses on the changing shape of the scholarly knowledge ecosystem.
  • Peter Organisciak is an assistant professor of Library and Information Science at the University of Denver. His research focuses on content-based methods for studying large-scale digital libraries.
  • Andrea Thomer is an assistant professor at the University of Michigan School of Information. She conducts research in the areas of scientific data practices, data curation, and computer-supported cooperative work.
  • Nic Weber is an assistant professor in the Information School at the University of Washington. His research addresses the development and long-term maintenance of data infrastructures.

References

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Caliskan, A., & Lewis, M. (2020). Social biases in word embeddings and their relation to human cognition.

Leavy, S. (2018, May). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. In Proceedings of the 1st international workshop on gender equality in software engineering (pp. 14-16).

LePeau, L. A., Morgan, D. L., Zimmerman, H. B., Snipes, J. T., & Marcotte, B. A. (2016). Connecting to get things done: A conceptual model of the process used to respond to bias incidents. Journal of Diversity in Higher Education, 9(2), 113.

Perkowitz, S. (2021). The Bias in the Machine: Facial Recognition Technology and Racial Disparities. MIT Case Studies in Social and Ethical Responsibilities of Computing.

Roselli, D., Matthews, J., & Talagala, N. (2019, May). Managing bias in AI. In Companion Proceedings of The 2019 World Wide Web Conference (pp. 539-544).

Weber, N., Fenlon, K., Organisciak, P., & Thomer, A. K. (2019, June). Conceptual models in digital libraries, archives, and museums. In Proceedings of the 18th Joint Conference on Digital Libraries (pp. 457-458).

Weber, N., Fenlon, K., Organisciak, P., Thomer, A. K., & Wickett, K. (2020a). Conceptual models in digital libraries, archives, and museums (SIG-CM) 2020. In Proceedings of the 19th Joint Conference on Digital Libraries.

Weber, N., Fenlon, K., Organisciak, P., & Thomer, A. K. (2020b). Conceptual models of the sociotechnical. Proceedings of the Association for Information Science and Technology, 57(1).