
Event Recap: When AI Governance Goes Grassroots

When

March 13, 2024
5:30 PM - 7:00 PM

Where

Ethica Coffee Roasters

Event Takeaways

AI Governance from the Ground Up: AI governance isn't exclusive to boardrooms and policy chambers. Community-level discussions hold power in shaping responsible and inclusive AI development.

AI's Hidden Value Chain: To govern AI effectively, look beyond its 'intelligence' to see the complex system of people, technologies, and ethical dilemmas involved in creating tools like ChatGPT.

Beyond Tech Hype: Question the 'move fast' mentality of Silicon Valley and focus on how AI systems tangibly affect communities, both in positive and potentially harmful ways.

Harnessing AI Critically: AI's impact on fields like healthcare and finance demands deep scrutiny of how algorithms can reinforce existing biases and inequities.

— 

A Toronto coffee shop becomes a microcosm of the urgent, existential debate around artificial intelligence.

"This is AI governance," announced technology ethics expert Blair Attard-Frost to a room of designers, researchers, and curious locals gathered at Ethica Coffee Roasters for DesignMeets’ “Co-Creating AI Futures” event.

The proclamation hung heavy in the air – tech governance tends to be decided in boardrooms and policy chambers, not neighbourhood coffee shops. Yet, here, amidst half-finished lattes and scribbled notes, a productive discussion about how to build and use AI was taking shape.

Last year, artificial intelligence burst from the confines of academia with promises to streamline healthcare, personalize education, revolutionize work, and boost businesses’ bottom lines. ChatGPT hit the market as one of the most impressive products to come out of Silicon Valley since Google Search, which is why it exploded to 100 million users in two months despite minimal marketing, unpolished UX, and a buggy sign-up process.

But alongside AI’s promises, and ChatGPT’s seemingly infinite use cases, runs a deep concern: who gets to decide how AI is developed and implemented, and how do we ensure the technology benefits the many, not just the few?

These questions were at the heart of Wednesday’s gathering.

Claudia McKoy: Bridging the Gaps

The first speaker of the night, Claudia McKoy, a strategic engagement expert and Founder of UpSurgence, brought an immediate sense of urgency to the conversation. "If we're going to build bridges with AI, we need to understand where the gaps are,” she told the audience.

We learned about McKoy's work leading the City of Mississauga's Black Community Engagement project where she made concrete recommendations for increasing political representation and reforming policing practices in the region. This experience, she explained, became the springboard for her subsequent work with the Toronto and Peel Regional Police forces (2022-2023), where she navigated sensitive issues like race-based data collection and community relations in policing. Today, McKoy is extending her work with the Peel Regional Police to develop a framework for the responsible use of AI within the force.

Claudia’s voice carried the weight of real-world impact, and her solutions-oriented approach was refreshing. Rather than focus solely on the potential harms of AI, she challenged us with the question: can we design AI-driven policing systems that mitigate, rather than amplify, socio-ethnic disparities?

Blair Attard-Frost: Reimagining AI Governance

Blair Attard-Frost, an Information Studies Ph.D. candidate at the University of Toronto, shifted the discussion from case studies to the foundations of AI and the community's role in shaping its development.

They kicked things off with an insightful analysis of AI value chains, deconstructing the notion of AI as a neutral computational tool. They emphasized the vast network of human and non-human entities underpinning AI systems like ChatGPT. These include physical components such as undersea cables, data centers, server racks, and machine learning models, as well as human stakeholders ranging from tech CEOs and shareholders to knowledge workers and manual labourers in lithium mines. This host of actors is then enmeshed in ethical and metaphysical contexts related to justice, consciousness, and human nature.

This complexity, Blair explained, is precisely why traditional governance models are inadequate for AI. The sheer scale, diverse stakeholders, and opaque decision-making processes demand a rethinking of oversight. From there, they introduced the audience to "AI countergovernance," a concept that refers to community and worker-led efforts to challenge and reorient AI governance toward inclusivity, justice, and harm mitigation.

As a whole, Attard-Frost’s presentation functioned as a powerful call for collaborative oversight. "We need to actively oppose AI that harms our communities," they stressed, before highlighting that this very edition of DesignMeets was itself a prime example of AI countergovernance.

Kem-Laurin Lubin: AI and Algorithmic Bias

Kem-Laurin Lubin, a PhD Candidate in Computational Rhetoric with a background in design, brought a critical lens to the conversation. Her previous experience at Siemens and BlackBerry, and later, as an interaction designer at Autodesk witnessing the automation of her work by software like Maya, fueled both her fascination and concern with AI's potential.

During her presentation, Kem-Laurin walked us through some of her recent research, explaining how algorithms manipulate user behaviour and create hidden advantages for certain groups, often to the detriment of others. This manipulation, she emphasized, could subtly erode agency in our increasingly AI-driven world.

In particular, she voiced concerns about AI's growing impact on healthcare allocation, creditworthiness assessments, and job advertising/recruiting. "AI is trained on skewed data," she cautioned. In healthcare specifically, she emphasized the danger of AI perpetuating existing biases via affordability-driven models, exacerbating healthcare inequities for women.

Lubin urged the audience to shift their focus away from Silicon Valley hype and "sensational headlines," insisting that communities must ask, "How are WE being impacted by this technology?"

Questions

Michael Dila, founder of Oslo for AI, expertly moderated the evening. His background in design and deep philosophical outlook challenged the group's broadest assumptions about artificial intelligence. He noted the limited public participation in AI development and posed provocative questions like, "What if AI isn't what we think it is?"

His query, "What does ChatGPT's rapid growth – hitting 100 million users in just two months – mean to you?" sparked lively debate.

Responses diverged. Some saw ChatGPT as a neutral productivity tool, while others argued that its popularity points to something deeper, including our desire to “talk to something” non-human. Lubin jumped in with a necessary dose of skepticism: "What's the rush [to 100 million users]?" she asked, challenging Silicon Valley's “move fast and break things” ethos.

Next, Dila asked the audience, "How has AI impacted your life?" One attendee’s candid response resonated deeply: "How has it not?" She described AI's subtle but pervasive influence, admitting, "I probably spend more money because of it."

Beyond the Boardroom

As the evening drew to a close, a shift was palpable. The abstract language of 'AI futures' shed its distant, corporate veneer, replaced by a tangible sense of responsibility. Participants of “Co-Creating AI Futures” left with new resources and a heightened awareness of their role in shaping this technology.

For me, a writer who's adopted AI but fears its impact – not just on my job, but on the essence of the craft – this event was a potent reminder: the conversation around AI development must move beyond executive boardrooms. It needs to happen in informal spaces – coffee shops, community centers, messy but vital places – where communities directly impacted by the technology can grapple with its potential, question its biases, and steer its progress.


Author: Niko Pajkovic

Bonus Content

Essay on AI Countergovernance, Blair Attard-Frost

Blair Attard-Frost's Presentation

Kem-Laurin Lubin's Presentation


Blair and Kem are available for questions, comments and new connections:


Blair Attard-Frost

Email: blair@blairaf.com

LinkedIn: https://www.linkedin.com/in/blairaf/

Kem-Laurin Lubin

Email: k4lubin@uwaterloo.ca or kemkramer@gmail.com


About DesignMeets

DesignMeets, proudly sponsored by Pivot Design Group and founded by Ian Chalmers, is a series of social events where the design community can connect, collaborate, and share ideas. Join us at a DesignMeets event to network, learn, and be inspired.

Sign up for our newsletter to stay in the loop.