A researcher in philosophy of technology, Alix Rübsaam investigates the societal and cultural impact of exponential technologies. She focuses on human activity in technological contexts such as Artificial Intelligence, information technologies, and digital environments, and on deploying emerging technologies responsibly.
Alix’s research centres on two projects. The first explores responsible AI, algorithmic bias, and exclusion. The aim is to map the impact of automated decision-making algorithms, to explicate how design decisions can lead to unwanted outcomes, and to empower leaders to build AI that is equitable, fair, and just. The second project investigates the digitalisation of information and its effects on decision making. The goal is to investigate and challenge computational paradigms and to equip leaders to make informed decisions in a manner that befits 21st-century dilemmas.
Alix is VP of Research, Expertise and Knowledge at Singularity. She oversees SU’s research efforts, body of knowledge, and community of experts. Prior to this, Alix was a PhD candidate at the University of Amsterdam and ASCA, where she researched collaboration between (technological) agents at the intersection of humans and computational systems. She writes and speaks about cyberpunk and science fiction literature, autonomous weapon systems, embodied robotics, and (responsible) AI.
When it comes to automating decision making with AI, algorithmic bias can become hard-coded into data-driven technologies, despite our best intentions. Applications abound: from hiring to manufacturing optimization, from supply chain management to recommendation algorithms. But all AI applications are at risk of perpetuating and amplifying blind spots in their models. Such biases do not just make these AI systems unfair or unjust; they can also hurt your bottom line if you don’t understand where the blind spots are.
This talk unpacks the design and decision making that go into automated systems, how to identify and analyze the mechanisms that lead to biased outcomes, and how to advocate for and build more socially responsible algorithms. Additionally, this session dives into the limitations and possibilities of leveraging data-driven technologies, and into how to be a leader in this emerging space. Participants will learn to identify different kinds of algorithmic bias; to recognize opportunities for responsible AI; and to assess an automated decision-making system for its risk of unintended consequences.
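As a minimal, hypothetical sketch of the mechanism this talk examines (the data, feature names, and numbers below are invented for illustration and are not part of the session materials), the snippet shows how a screening model trained on historically skewed hiring decisions can reproduce that skew, even when the protected attribute itself is left out of the features:

```python
# Toy illustration: historical bias re-emerges in an automated screening model
# via a correlated proxy feature, even though the protected attribute is
# excluded from training. All names and values are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, n)                # protected attribute: 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)              # equally distributed across both groups
proxy = group + rng.normal(0.0, 0.5, n)      # e.g. a postcode-like feature correlated with group

# Historical hiring decisions favoured group A regardless of skill.
hired_in_past = (skill + 1.5 * (group == 0) + rng.normal(0.0, 1.0, n)) > 1.0

# Train only on seemingly "neutral" features; the group column is never used.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired_in_past)
recommended = model.predict(X)

# The disparity in the historical labels reappears in the model's recommendations.
for g, label in [(0, "group A"), (1, "group B")]:
    print(f"recommended-for-hire rate, {label}: {recommended[group == g].mean():.2f}")
```

Even with the sensitive attribute removed, the correlated proxy lets the model carry the historical disparity forward, which is the kind of blind spot this session teaches participants to identify.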
This immersive, hands-on simulation exercise goes through the steps of designing and training an AI. Participants learn how decisions made in the design phase influence the workings and outcomes of the algorithms built during the simulation. They learn how to identify and analyze the mechanisms that lead to unwanted and unintended outcomes in AI, as well as how to advocate for and build more socially responsible algorithms. No formal training is needed to participate.
During the workshop, participants will make decisions about the training and design of algorithms in a simulation of real-world implementation of technologies across industries. Shortening the distance between design, implementation, and outcomes increases understanding of how our cultural backgrounds and assumptions become programmed into data-driven technologies.
Then, participants will learn to identify opportunities for responsible AI; pinpoint potential pitfalls of algorithmic bias; and assess the risk of unintended outcomes. During the simulation, they will also become familiar with the design, implementation, and use of automated decision-making algorithms and Machine Learning systems.
AI as an existential risk to humanity has become a mainstay of news headlines. While some predict that the rise of Artificial Intelligence will mean the “end of humankind”, others see no future without algorithms and data-driven systems. What sense can we make of these predictions and warnings? This talk unpacks the effects of thinking about our brains as computers, and the impact this thinking has on how we leverage a technology like AI.
Our current ‘computational’ way of thinking has a major impact on our culture and our sense of self. At a macro level, software has come to function as a metaphor for humanity as a whole. We will investigate what it means to be human in the digital age, and challenge our understanding of Artificial Intelligence as a threat to humanity.
This module on critical thinking and decision making in the age of exponential technologies consists of three parts.
The first part, on information, is designed to help participants navigate an environment of abundant information. Converging exponential technologies have radically changed our relation to information. As available insights continue to increase and data-processing technologies become ever cheaper, we need a new skill set to make sense of this abundance of information. We explore the shift in how we build knowledge, and the skills we need to address this shift.
Part two dives into the computational perspective through which we engage with our surroundings. As computers and information-processing technologies continue to influence ever more aspects of our lives, it is difficult to think outside the data-driven lens through which we understand the world around us. We explore the entanglement of our culture and our digital technologies, discuss the risks and limitations of computational thinking, and offer skills to mitigate those risks.
Part three weaves these threads together: each of us now has the tools and the information to be a critical decision maker. Participants are empowered to use these tools to tackle the decisions they face individually, as leaders, and as part of a company, while avoiding the pitfalls of computational reasoning.
The three parts can also be delivered as standalone sessions.