Digital Humanism in Complexity Science

When an AI system is deployed, it does not operate in a vacuum: it is integrated into organisational processes and interacts with people and the environment. The AI system becomes part of a larger socio-technical system, in fact a complex system. Complexity science research is essential to help us understand the possible effects and negative side effects of such systems in a diversity of contexts.

Digitalization and the widespread use of AI systems have revolutionized our world. While they offer unprecedented opportunities, they also raise serious challenges. The complex interaction between people, organisations, and algorithms gives rise to many of these issues: from loss of privacy to algorithmic biases that harm and discriminate against already marginalized groups; from information asymmetry (a few players hold most of our data, while we have very little insight into their operations in return) to polarization and the spread of fake news on social media.

Digital Humanism aims to put people and the planet back at the center of the digital transformation. However, attempts to assess and mitigate harms arising from the use of AI systems from a purely technical or purely social perspective often fall short. Understanding these harms in their many contexts requires complexity science research combined with a collaborative, multi-disciplinary approach. leiwand.ai and the Complexity Science Hub Vienna have therefore formed a consortium to develop a roadmap for digital humanism at the Complexity Science Hub (CSH).

For leiwand.ai it is especially important to bring its expertise in trustworthy AI to the table and to emphasize algorithmic fairness and justice throughout the project.

Part of the project's goal is to build a sustainable community of researchers and business stakeholders in complexity science, digital humanism, and computational social science, and to foster a vital exchange on humanistic values in AI.

A first workshop with a select group of high-level experts from diverse backgrounds took place in December 2022. The participants discussed the future of data-driven analysis and its alignment with the principles of digital humanism, as well as open research questions for tackling algorithmic biases. The results of the workshop will be integrated into a white paper to be published in 2023.

Consortium partner:

Complexity Science Hub (CSH)

Image source: Ian Dooley - unsplash.com
