Research Ethics & The Paradox of Visibility
Why 'more visible' doesn't always mean 'better' — and how we navigate this complexity
The Problem with Simple Visibility
It's tempting to assume that making marginalized voices more visible is inherently good. But visibility is complicated. For some communities, particularly those with protected characteristics or facing systemic discrimination, increased visibility can mean increased targeting, surveillance, or unwanted attention. The question is not simply whether something becomes visible, but visible *to whom* and *on whose terms*.
The Visibility Paradox
- **Visibility ≠ Empowerment**: Being seen doesn't automatically translate to being heard or respected. Sometimes visibility exposes communities to harm.
- **Context Matters**: A forum discussing LGBTQ+ issues in a region with criminalization laws faces different risks than one discussing gardening tips.
- **Platform Power**: When researchers, journalists, or platforms 'discover' a community, they often reshape it through their own interpretive frameworks, potentially erasing the community's self-understanding.
- **Algorithmic Amplification**: Making content visible to algorithms can subject communities to content moderation, advertising profiling, or data harvesting they explicitly opted out of.
The Right to Remain Invisible
- **Opting Out Matters**: Platforms like Mastodon explicitly reject indexing and data ingestion by design. This is not a failure of discoverability; it is a deliberate choice for community autonomy. (See the sketch after this list for how we check these opt-out signals before collecting anything.)
- **Strategic Obscurity**: Some communities use obscure platforms, coded language, or gated access precisely to avoid surveillance. Making them 'findable' undermines their safety strategies.
- **Informed Consent**: Just because content is 'public' doesn't mean its subjects consented to academic analysis, journalistic reporting, or AI training. Legal access ≠ ethical use.
- **Community Self-Determination**: Communities should decide *if*, *how*, and *to whom* they become visible, not researchers or platforms.
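To make 'Opting Out Matters' concrete, here is a minimal sketch of the kind of pre-collection check we mean. It assumes a Mastodon-compatible server: `noindex` and `discoverable` are fields of the Mastodon account entity (`noindex` only appears on newer server versions), the `requests` library is a third-party dependency, and the conservative defaults are our own choice rather than a platform requirement.

```python
import urllib.robotparser

import requests  # third-party HTTP client; assumed available


def may_collect(instance: str, acct: str, user_agent: str = "research-bot") -> bool:
    """Return True only if every opt-out signal we can check permits collection.

    A conservative sketch: missing or ambiguous signals are treated as
    "do not collect". `noindex` and `discoverable` are Mastodon account
    fields; older servers omit `noindex`, in which case we decline rather
    than guess.
    """
    # 1. Respect the instance's robots.txt for our user agent.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"https://{instance}/robots.txt")
    robots.read()
    if not robots.can_fetch(user_agent, f"https://{instance}/api/v1/accounts/lookup"):
        return False

    # 2. Respect the account's own visibility preferences.
    resp = requests.get(
        f"https://{instance}/api/v1/accounts/lookup",
        params={"acct": acct},
        headers={"User-Agent": user_agent},
        timeout=10,
    )
    resp.raise_for_status()
    account = resp.json()

    # Default to *not* collecting when a signal is missing or negative.
    if account.get("noindex", True):  # author opted out of search indexing
        return False
    if not account.get("discoverable", False):  # author opted out of discovery
        return False
    return True
```

The hostnames and account handles passed to `may_collect` are whatever a study actually targets; the point of the design is that the default answer is refusal, not collection.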
Who Benefits from Visibility?
- **For Policymakers**: Do our findings serve the community's interests, or do they enable surveillance, regulation, or 'intervention' that communities don't want?
- **For Journalists**: Are we amplifying community voices, or extracting narratives for institutional media while communities remain uncompensated and unconsulted?
- **For Researchers**: Does making data 'discoverable' advance academic careers at the expense of community trust and safety?
- **For AI Platforms**: Are we inadvertently enabling data harvesting for commercial LLMs that communities explicitly rejected?
Our Ethical Commitments
- **We Default to Privacy**: Unless there's a clear community benefit, we prioritize obscurity over visibility. We report aggregates, not individuals (a sketch of this practice follows this list).
- **We Ask First**: Where possible, we engage moderators and community representatives *before* including their spaces in our research.
- **We Enable Removal**: Any community can request exclusion at any time, no questions asked. We maintain a transparent Takedown Protocol.
- **We Reject Surveillance Logics**: We do not share data with platforms, law enforcement, or actors seeking to monitor or target communities.
- **We Resist Simplification**: We present findings with full context, acknowledging contradictions and refusing to flatten complex realities into neat policy recommendations.
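As a concrete illustration of 'aggregates, not individuals' and 'We Enable Removal', the sketch below combines a takedown-driven exclusion list with small-cell suppression. The data schema (`community` and `topic` keys), the exclusion-list entry, and the threshold of 10 are all hypothetical; they stand in for whatever our Takedown Protocol and disclosure review actually specify.

```python
from collections import Counter

# Hypothetical exclusion list maintained under our Takedown Protocol:
# any community listed here is dropped before analysis, no questions asked.
EXCLUDED_COMMUNITIES = {"forum.example.opted-out"}

MIN_CELL_SIZE = 10  # suppress any aggregate describing fewer than 10 posts


def safe_topic_counts(posts: list[dict]) -> dict[str, int]:
    """Report topic frequencies as aggregates only, with small cells suppressed.

    `posts` is assumed to be a list of dicts with 'community' and 'topic'
    keys (a hypothetical schema). No per-account or per-post records are
    ever emitted, and any topic seen fewer than MIN_CELL_SIZE times is
    withheld rather than reported.
    """
    counts = Counter(
        post["topic"]
        for post in posts
        if post["community"] not in EXCLUDED_COMMUNITIES
    )
    return {topic: n for topic, n in counts.items() if n >= MIN_CELL_SIZE}
```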
Missing Data as Critical Data Practice
What we *don't* collect is as important as what we do. Following scholars like Jonathan Gray ('Public Data Cultures'), we recognize that **'missing data'** often reflects strategic choices by marginalized groups:
- **Absence as Resistance**: Not appearing in datasets can be a form of refusal against extractive research practices.
- **Silence as Strategy**: Communities may withhold information to protect themselves from hostile actors.
- **Gaps as Evidence**: The lack of data from certain groups may indicate systemic exclusion, surveillance fears, or platform inaccessibility.
We document these absences and their possible meanings rather than treating them as methodological failures to be 'solved'. A minimal example of how such gaps can be recorded follows below.
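This is one hypothetical shape for that absence register: the field names and the example entry are invented for illustration, not a standard format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AbsenceRecord:
    """One documented gap in the corpus and the meanings we think it may carry."""

    source: str        # platform or community where data is absent
    description: str   # what is missing and why collection stopped
    possible_meanings: list[str] = field(default_factory=list)
    noted_on: date = field(default_factory=date.today)


# Example entry, recorded alongside findings rather than "imputed away".
register = [
    AbsenceRecord(
        source="forum.example.invisible",
        description="No public archive; moderators declined inclusion.",
        possible_meanings=["strategic obscurity", "refusal of extractive research"],
    ),
]
```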
Recommended Reading
- **Gray, Jonathan (2024)**. *Public Data Cultures: How Civil Society Uses and Practices Data*. MIT Press. Explores how marginalized groups negotiate visibility and representation through data practices.
- **Costanza-Chock, Sasha (2020)**. *Design Justice: Community-Led Practices to Build the Worlds We Need*. MIT Press. On centering community agency in research design.
- **Noble, Safiya Umoja (2018)**. *Algorithms of Oppression*. NYU Press. On how visibility algorithms can harm marginalized communities.
- **Benjamin, Ruha (2019)**. *Race After Technology*. Polity. On the risks of making marginalized communities legible to systems of power.
Our Commitment
We will never prioritize 'research impact' over community safety. If our work makes someone vulnerable, we've failed — not as researchers, but as humans.
Questions about our ethical practices?
Contact our research ethics team.

Last Updated: November 2025