The Seattle Worldcon Controversy: AI, Bias, and the Trouble with Tech in Public Life

Would you ever think that a long-running science fiction convention would be at the centre of controversy surrounding #GenAI and ethics? Seattle Worldcon 2025 is facing a boycott from sci-fi authors, gamers, artists, and other fans, with #ChatGPT at the centre of discussion. If you’re not familiar with Worldcon, you might know it as the convention where the Hugo Awards for science fiction and fantasy writing are presented. It’s a big-time event.
It seems that sci-fi authors are happy to write fiction about artificial intelligence, but less happy when AI threatens their livelihoods. This is understandable. Many in the community were already livid that their work had been used to train LLMs, such as those powering ChatGPT, without their permission. The further decision of the #Worldcon 2025 organisers to use ChatGPT as part of the conference participation vetting process has proved one step too far. What has happened recently is almost a textbook example of how not to read the mood of a vocal community, coupled with a complete breakdown in sensible communication.
At the centre of the discussion has been the review process that determines which Worldcon attendees can take to the stage as panellists. Worldcon does review potential panellists to ensure a diverse programme, avoid controversial views, and maintain quality, amongst other reasons. Many potential panellists will not make the cut. For instance, in 2024, Game of Thrones author George R.R. Martin revealed that he had not been selected to participate. But a visibly fair process here is key.
So, how does the panellist selection and vetting process traditionally work? Before 2025, a group of volunteers would look up every potential participant online, search for and compile information about them, and then use that compiled information to make an informed decision about who to include. A process like this is never perfect. Human judgement can be inconsistent and biased.
In 2025, Worldcon organisers said they received over 1,300 applications from people who wanted to take part in panel discussions. The organisers announced a change to the vetting process. This year, part of the search process used to gather details on possible panellists was automated. Applicant names were put into ChatGPT, which then conducted the previously human-led series of web searches. The collected and compiled information was then reviewed by the human volunteers, as before.
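To make the shift concrete, here is a minimal and entirely hypothetical sketch of what automating the compile-and-collate step might look like. This is not the organisers’ actual process: the prompt, helper name, and model choice are assumptions for illustration, and the search snippets are assumed to come from an ordinary web search performed outside the model.

```python
# Hypothetical sketch only: one way the compile-and-summarise step might be
# automated. This is not the Worldcon organisers' actual process; the prompt,
# helper names, and model choice are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def compile_brief(applicant_name: str, search_snippets: list[str]) -> str:
    """Condense separately gathered web search results into a short brief
    for human reviewers to check."""
    snippets_text = "\n\n".join(search_snippets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the provided search results factually. "
                    "Do not add information that is not in the snippets."
                ),
            },
            {
                "role": "user",
                "content": f"Applicant: {applicant_name}\n\n"
                           f"Search results:\n{snippets_text}",
            },
        ],
    )
    return response.choices[0].message.content


# Example call with made-up data; the resulting brief still needs a human
# reviewer, because the model can mix up people who share a name or echo
# whatever bias is already present in the snippets it was given.
# print(compile_brief("Jane Example", ["snippet one ...", "snippet two ..."]))
```

Even in a sketch like this, the weak points are easy to see: the quality of the brief depends entirely on what the snippets contain, and the model can still blur together people who share a name or reproduce bias from its training data.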
What then was the controversy? With many in the Worldcon community already unhappy about GenAI, the new vetting process created an immediate backlash. The apparent use of #ChatGPT to assess who could go on stage made existing frustrations worse. Community discussions noted how LLMs can be unreliable and can reflect the biases in the data they were trained on. Even with human oversight, some people feel that this new ChatGPT-supported approach undermines trust.
It is still not completely clear to me exactly what role ChatGPT played in this vetting process. I suspect that all that happened is that the searches previously completed by hand were done in exactly the same way, with ChatGPT used to save volunteer time and collate the search results. But what if the process not only did the searching, but was also used to summarise responses and perhaps even make recommendations? If there was bias in the underlying LLM, this could easily be reflected in the data provided to volunteers.
There are other issues to consider with data collection as well, such as what happens when someone has a common name, or when the information available about them online is limited. In many ways this mirrors concerns about reviewing online information relating to job candidates. It is vitally important that human volunteers have access to complete and correct information, regardless of whether GenAI is used, a standard AI-free script is programmed to scrape information from websites, or the data is gathered by hand.
The conference organisers did respond to the criticism, but the response was too little and too late. Some have even suggested that the organisers used ChatGPT to write their public statement, which critics felt was impersonal and missed the tone of the community.
This controversy is all happening alongside wider calls to boycott the upcoming Worldcon event. Some attendees are objecting to current political decisions in the United States, especially around LGBTQ+ rights and inclusion. Worldcon is historically a space for shared values and progressive ideals, and many members feel the organisers have not responded clearly enough to their concerns. These tensions are adding to a growing sense of division.
The main issue here is not just about the technology. It is about trust, openness, and ethical responsibility. Adding a new tool does not remove bias, or the perception of bias. And if people feel shut out of the conversation, then good intentions can never be enough.
The current Worldcon problems can’t be fixed by technology alone. Human concerns are at the heart of the discussion. What is needed is clear communication, active listening, and an understanding that ethical decision-making should be built into how a conference like Worldcon operates from the start.
Technology used in public decisions is never neutral. This is not just a problem for one convention. It is part of a much bigger conversation about how we use these #GenAI systems and who gets to decide. But, ultimately, it is the communication, or lack thereof, that has plagued the run-up to Seattle Worldcon 2025.

Thomas Lancaster

@DrLancaster_1

Computer Science academic. Technology and generative AI enthusiast. Known for research into academic integrity and contract cheating.