On Tech, Human Rights and Intersectionality

Some notes in Spanish and English

Notes on the challenges of online gender-based violence, inspired by Epstein's funding of science and technology.*

By Paz Peña O.

There is an elephant in the room. In this one, and in every room in Santiago or in other cities in Chile and around the world where we gather, every so often, to discuss the many nuances of the gender-based violence that takes place on digital platforms. There is an invisible elephant that those present can barely feel, and that, with luck, leaves us some space for our own presence.

Because I am very bad at improvising, because opportunities to talk about this subject are not all that common and, above all, because the force of recent news eloquently demonstrates the problems inherent in the dominant digital industry, I would like to take a few minutes to talk to you about one of the most important elephants in the global discussion on online gender-based violence.

An elephant is haunting Silicon Valley: the specter of misogyny.

In recent days, in-depth reports in the most important print media in the United States have put on their front pages the close relationship that the world of science and technology had with the millionaire Jeffrey Epstein.

For those who do not know who this character is: in the middle of this year, Epstein was jailed and charged by U.S. prosecutors with running a “vast network” of underage girls whom he allegedly paid for sexual services in his Manhattan and Florida mansions. The modus operandi was that three of his employees arranged his sexual encounters with girls who were expressly underage, who came from poor homes or broken families, and who were hired to give massages but soon ended up being abused by Epstein and, at times, by his other millionaire friends.

Prosecutors collected testimony from around 80 women. Epstein faced a sentence of up to 45 years but, on August 10 of this year, he was found dead in his cell, an apparent suicide.

Back in 2008, Epstein had already evaded federal charges for these crimes thanks to a controversial plea deal with prosecutors, under which he accepted 13 months in jail and inclusion in the federal sex offender registry.

Epstein poured millions into scientists and into innovation and technology centers in the United States. In fact, he once said he had “only two interests: science and pussy.” It was common, for example, for him to host gatherings of scientists on his private island, such as the one on artificial intelligence he held in 2002.

This funding continued even after 2008, when he himself had already admitted to being a sex offender. Thus, we learned a few days ago that the prestigious MIT Media Lab at the Massachusetts Institute of Technology, through its director Joi Ito, kept receiving his million-dollar donations, resorting to the contrived trick of making them anonymous, in addition to inviting him to campus (despite his record as a sex offender) and consulting him about the use of the funds.

For those unfamiliar with the MIT Media Lab, it is the design and new media laboratory founded by Nicholas Negroponte, the same man who later created that marketing-driven program which, somewhere between colonialism and techno-solutionism, sought to deliver One Laptop Per Child. For many, the MIT Media Lab is the “academic” arm of Silicon Valley, representing the so-called Third Culture, which seeks to bring together artists, scientists, entrepreneurs and politicians to create science-based humanities.

According to documents obtained by New Yorker journalist Ronan Farrow (yes, the same one who broke the Harvey Weinstein scandal that set off the #MeToo wave in the United States), Epstein served as an intermediary between the MIT Media Lab and potential donors such as the philanthropist Bill Gates (yes, the Microsoft one), from whom he secured USD 2 million, and the private equity investor Leon Black, from whom he obtained USD 5.5 million. The effort to hide Epstein's identity was such that Joi Ito referred to the financier as Voldemort, “he who must not be named.”

This scandal at the MIT Media Lab has, incredibly, turned the discussion into one about whether or not science and technology can be funded with money from “dubious” sources. And the victims? Just a few terse, polite words. Because that, in the end, is what the Silicon Valley world is about: venture capital funding, a model that innovation centers like the MIT Media Lab seem to accept without blinking. The money goes to whoever best sells disruption, innovation and all those things people say in TED Talks.

Lawrence Lessig, a friend of Joi Ito, a renowned academic and the creator of the Creative Commons licenses, wrote a long article defending Ito (who once described Epstein as “really fascinating”), arguing that Ito was convinced Epstein had reformed and that Epstein was smart enough to realize he could lose everything. Moreover, Lessig assumes that the Epstein donations accepted by the MIT Media Lab were not image-laundering for Epstein, since Ito forced them to be anonymous. In his long article there is not a single reflection on the underage girls who were Epstein's victims because, suddenly, for the dominant world of Silicon Valley and its academic arm, Epstein's only victim is Ito.

I remember finishing that column by Lessig, an academic whose work introduced me to the world of free culture, completely stunned. Evgeny Morozov, academic and researcher, described my feeling better in a column of his own:

“It is not unusual for intellectuals to serve as useful idiots for the rich and powerful but, under the Third Culture, this reads like a job requirement.”

Silicon Valley

Meredith Whittaker, research scientist at New York University and co-founder and co-director of the AI Now Institute, tweeted something very significant about this drift in the conversation around Epstein and the MIT Media Lab:

“The mental contortions of tech's official #SmartBoys, using whole paragraphs to say what could be said in a single sentence: that the abuse and exclusion of women and girls is acceptable collateral damage in the pursuit of INNOVATION. The diversity crisis in tech is no surprise.”

These simple words are precisely the elephant in Silicon Valley that everyone knows about but that is always painful and disappointing to accept: the bodies of women and girls, the integrity of their lives as subjects and as members of communities, do not matter.

They do not exist, not even in a discussion that concerns them directly, such as the Epstein one. They are not in the innovation equation, except as an add-on downloaded from “the cloud” that tries to patch errors that cost mental health, lives and even democracies.

The engagement model has enabled targeted interventions during elections? Oops, let's just build another algorithm to fix it.

The decisions of artificial intelligence systems can hurt people more because of their race and social class? Oops, we'll meet in San Francisco to draft some ethical principles.

You realized, with the Cambridge Analytica scandal, that we exploited your personal data without permission, selling it to whoever showed up, and profiling, classifying and scoring you without any transparency? Oops, now you'll get a few more privacy-control buttons and the matter is settled.

Add-ons are the collateral cost Silicon Valley works with: as Joi Ito says in one of his TED talks, in the vertigo of “deploy or die” digital technologies there is no room for critical reflection on the effects of the technologies that get deployed. For Ito, digital technologies are the personal vision of an entrepreneurial individual, here and now, not the outcome of reflection by a diverse community.

The same happens with gender-based violence. All the meager progress achieved with the platforms has come in the form of add-ons.

AND LET THERE BE NO DOUBT: if today the big platforms respond at all to acts of gender-based violence, it is only thanks to the pressure of organized feminist communities. It has been a struggle of years, waged under tremendous inequality and with complete abandonment by the States, to get transnational platforms to address even a minimal fraction of the needs of victims on our continent.

But it so happens that, in a world of add-ons (where soon they will invent one to tell whether or not a donor has reformed from being a sexual predator, and presto, problem solved!), sometimes the elephant turns on all the lights and it is simply impossible not to see it in any room.

The MIT Media Lab / Epstein scandal, which will in any case soon be forgotten, has at least cast a few rays of clarity for this short presentation today on the challenges of online gender-based violence in Chile and, I dare say, in many other Latin American countries:

  • Yes, we need public policies that, beyond criminal punitivism, connect with the broad agenda of women's and gender rights, so that we can better understand the phenomenon and work on the many dimensions of a highly complex problem, one that strikes very differently depending on the intersectional point of view.
  • Yes, we need a human rights lens on online gender-based violence, both to understand its harm and to think of responses that, for example, do not undermine a fundamental vector of freedom of expression such as anonymity.

And yes, we live under a patriarchal system that is already, de facto, a violent imposition, where “gender-based violence” is not an exception to the rule. How could we not recognize it, when the Epstein-MIT Media Lab case is yet another sign that there are bodies that do not matter.

That is why we must reclaim the creative and emancipatory power of feminism to imagine and develop a different, collective digital technology. We do not necessarily need more women; we need more feminism in technology. We need a technology that stops resting, as if it were nothing, on the destruction of bodies that do not matter, such as those of bio women and girls, queer and trans people.

Scandal after scandal, the Silicon Valley culture industry's power to sell shiny trinkets grows less and less effective. In that growing void there is a latency that can become pure creativity, building digital technologies and emancipatory uses that truly confront misogyny and hate with the fire of Southern feminism.

Thank you very much.

:::::

*Text written for the panel discussion “Violencia de género en línea: diagnóstico y desafíos” (Online gender-based violence: diagnosis and challenges).

Artificial Intelligence

The organizations signing this document are part of the “Al Sur” consortium, an organized group of civil society organizations in Latin America that seeks to strengthen human rights in the digital environment. The public consultation on “Ethics and Data Protection in Artificial Intelligence: continuing the debate” promoted by the ICDPPC (International Conference of Data Protection and Privacy Commissioners) is a new opportunity for Global South and Latin American perspectives to become part of the debate around ethics and data protection in the context of Artificial Intelligence (AI)*. We believe that the considerations in this document could add depth and complexity to some aspects of the ICDPPC declaration.

In this regard, we appreciate that the declaration presented by the ICDPPC recognizes that the development of AI increasingly threatens respect for rights such as privacy and data protection, and that its development must be complemented with ethical and human rights considerations. In response to this greater challenge, it seems crucial that the ICDPPC has identified the need for data protection and privacy authorities to work together with other human rights authorities in order to develop perspectives that respond to the complexity posed by Artificial Intelligence systems.

And it is precisely in this context of recognizing the complexity of the scenarios that Artificial Intelligence presents, in addition to the unequal distribution of power among all the interested parties involved in AI developments and outcomes, that the signatory organizations suggest considering the following aspects in some of the principles proposed by the ICDPPC.

Opting for the use of the international human rights framework to assess the effects of AI

We celebrate the incorporation of the idea of ethics in coordination with the concept of “privacy by design”, which has gained popularity thanks to the European Union's General Data Protection Regulation (GDPR). However, we are cautious about the generality of the concept of “ethics by design” and, therefore, about its potential to become a battlefield dominated by the values of dominant Global North cultures that do not necessarily reflect cultural diversity. In this regard, we join the call of specialists such as Eileen Donahoe (Executive Director, Global Digital Policy Incubator, Stanford University) when she states that “our existing human rights framework is an invaluable lens through which to assess the effects of AI on human beings and humanity”.

Using the internationally agreed human rights framework offers several advantages. Among them, many countries (both in the Global North and the Global South) already have advanced legislation on the matter, in addition to the standards that regional systems have developed to protect fundamental rights. All of this leaves us better prepared to face AI's impact on the exercise of multiple rights of individuals and communities, such as civil, political, economic, social and cultural rights.

Explicit recognition of groups in a particular condition of vulnerability due to AI systems

According to the available evidence, it can now be affirmed that (as the ICDPPC declaration recognizes) the use of Artificial Intelligence has an impact not only on individuals but also, significantly, on groups in society. However, it seems fundamental to us that the declaration explicitly acknowledge that there are “groups in a situation of special vulnerability” to the harmful effects of AI systems on their human rights.

In Latin America and the Caribbean we observe several worrying cases in this regard. For example, the use of the PredPol software by the Ministry of the Interior of Uruguay to identify the sections of a city where crimes are most likely to be committed has been questioned by local and international organizations, which have expressed that these tools “tend to replicate the biases of training data and the historical power dynamics between law enforcement and minority or underprivileged populations, and that they are used to justify police presence in marginalized areas”. Similar criticisms have been made of systems that aim to automatically predict adolescent pregnancies in the province of Salta, Argentina, as well as of the recently launched system to predict the risk of social vulnerability in childhood and adolescence in Chile.

Explicitly recognizing the vulnerable situation of certain social groups, on the one hand, makes more evident the need for both companies and policymakers to take a contextual look at the effects of AI in each of their countries and, on the other, allows data protection authorities themselves to compare and understand the effects on the most vulnerable communities in our societies.

The responsibilities of States should be explicit when they use AI systems to facilitate public policies

Likewise, we believe States should be explicit about their duty to protect human rights whenever an AI system is purchased, designed and/or implemented to define their public policies, since States are the main guarantors of these rights. This is particularly relevant because, as we saw in the previous point of this document, human rights are undermined when many of these systems end up automating discriminatory policies toward particularly vulnerable populations.

In Latin America this is particularly worrisome given the little or no transparency about how data has been collected and about whether data subjects (or their legal representatives) explicitly authorized secondary uses of their data, such as those implemented in many AI systems. Likewise, in a regional context of relatively weak personal data protection standards and enforcement authorities, the cleaning of databases and the possibility for affected citizens to challenge the outcomes of AI systems are scarce or nonexistent.

Recognize the tensions that AI introduces into the traditional data protection system and, from there, advance toward agreed solutions

As the declaration states, it is important that information be delivered in a timely and intelligible way to individuals when they interact directly with an AI system or when they provide personal data to be processed by such systems. However, we believe these solutions rest on a vision of individual freedom that does not consider people's inequality of power, knowledge and resources. If we incorporate these elements into the analysis, we realize that people's capacity to decide is strongly limited. That is why we consider it fundamental to make explicit at least two aspects that should complement the traditional approach to this topic in the context of AI:

  • The uncertainty of the principle of purpose: For authors like Zeynep Tufekci, companies do not have the capacity to inform us about the risks we accept, not necessarily because they act in bad faith, but because increasingly powerful computational methods (such as machine learning) work like a “black box”: not even those with access to the code and the data can know the consequences a system will have on our privacy. In this sense, fundamental matters such as the purpose limitation principle in data collection may end up compromised.

  • The difficulty of informed consent: In addition, since the operation of these systems is highly complex even for their own developers, it seems unfair that individuals should bear the responsibility of informing themselves about and understanding matters as arid as the systems' impact on their human rights. In this context, it is important to recognize that traditional forms of data protection, such as users' informed consent, no longer have their supposed efficacy in complex systems such as AI. Moreover, informed consent can be used as an excuse to legitimize harms to privacy and data protection. Therefore, obligations to respect rights must be fulfilled regardless of whether consent is obtained. Just as in other areas where the unequal position of the parties has been recognized (labor law and consumer law, among others), consent must be considered a requirement that joins other duties, not something that replaces them.

With this background, it is important to recognize the weak points of traditional data protection systems and, therefore, to commit to advancing agreed mechanisms that strengthen the supervision of AI systems, set boundaries grounded in our human rights, and implement mechanisms of public transparency.

Be explicit about the activities assumed by the authorities supervising AI systems

Consistent with the previous point, we recognize the vital importance of the authorities that today supervise AI, since they are the engine for promoting accountability among all the relevant stakeholders in these systems. Thus, we believe it is very important to be explicit about three aspects of the work of State supervisory authorities:

  • It is important that these authorities (whether data protection authorities or similar bodies) have powers established by law and, accordingly, an adequate budget and trained human resources, so that they can address the complex scenarios that AI poses for the protection of human rights.
  • Likewise, the independence of these authorities must be expressly guaranteed.
  • In response to the complexity of AI scenarios and the unequal budgets available for accountability work (especially between the Global North and the Global South), transparent cooperation mechanisms must be established among authorities, academia, the private sector and civil society in order to facilitate the discussion of evidence and impacts.

States that use AI systems to define their public policies, in any area, should also have mechanisms for transparency, auditing and accountability, carried out by independent committees, covering the development of the system concept, its tendering, the databases used to feed the system, and its development and implementation over time.

Point out the oligopolistic forces of the market and their effect on AI

Although the declaration acknowledges “the potential risks induced by the current trend of market concentration in the field of artificial intelligence”, we believe it is fundamental to recognize and expressly ensure a balance of power among all the parties involved in Artificial Intelligence systems and, in particular, to be explicit about the risks people could face due to the dominant position reached by a handful of companies offering digital services (which are mainly powered by their users' data), such as Facebook, Google and Amazon, among others.

Because AI needs large amounts of data to improve its effectiveness, the level of dominance these companies currently hold in the market for the exploitation of their users' personal data is worrisome. This dominance, which escapes traditional competition-law analysis because their service offerings are so diverse, is aggravated by the participation of such companies in multiple vertically integrated markets, as well as by the public acknowledgment that some of them share their users' personal data with other companies.

Likewise, while it is already difficult for countries in the Global North to achieve a certain level of accountability from these companies, the task becomes even harder for countries in the Global South, which often lack a strong institutional framework for competition and consumer protection. We urge the ICDPPC to explicitly recognize this market reality regarding the companies developing and/or implementing AI, so that special control and accountability mechanisms can be created for this type of oligopoly in Artificial Intelligence.

Security in AI systems and in their outcomes

The adoption of digital security mechanisms consistent with human rights standards is a matter of the utmost importance. In this sense, we understand that adopting a rights perspective to define “digital security” implies that the center of the analysis should not be concepts such as “national interest”, “national security”, “economic interest” or the like. On the contrary, digital security should focus on people's ability to interact with technology in a way that benefits their needs and preferences, without exposing them disproportionately to risks to their autonomy and identity.

A first aspect of digital security in AI systems is that actors must adopt practices that guarantee the integrity, confidentiality and availability of the system, in order to prevent malicious interference with the data that feeds the system or with its decision-making, or deviation from the original purpose of its use.

In addition, they must ensure that people who may be impacted by AI decisions are provided with the tools needed to critically understand and analyze those systems and to determine whether their use could benefit or harm their life situation. In this sense, it cannot be ignored that millions of people in Latin America and the Caribbean (and in the rest of the world) live in poverty and with low levels of education, and that their risk of marginalization could therefore be increased by the application of AI. Moreover, they may not even be able to access information about, or understand, the consequences of such systems. This social inequality must also be addressed by the ICDPPC as part of a real effort to ensure the safe use of AI.

*For the purposes of this report, we refer to the full spectrum of different intelligences and data-based processes by the name Artificial Intelligence or AI: from automated and algorithmic decision-making to machine learning, including deep learning that mimics biological neural networks, among others.

:::

This document (coordinated by Paz Peña for Al Sur) was signed on January 25, 2019 by the following organizations:

  • Derechos Digitales. Latin America. (derechosdigitales.org)
  • Asociación por los Derechos Civiles (ADC). Argentina. (adcdigital.org.ar)
  • Hiperderecho. Peru. (hiperderecho.org)
  • IPANDETEC. Panama. (ipandetec.org)
  • Red en Defensa de los Derechos Digitales (R3D). México. (r3d.mx)
  • TEDIC. Paraguay. (tedic.org)
  • Fundación Karisma. Colombia. (karisma.org.co)
  • Coding Rights. Brazil. (codingrights.org)
  • Idec. Brazil. (idec.org.br)

By Paz Peña and Joana Varon

A Feminist Consent on the Internet

It's strange to think that two of the most important discussions today revolve around the same concept: consent. On the one hand, the whole #MeToo movement has helped resurface in public opinion an old and never settled debate on sexual consent; on the other, the Facebook–Cambridge Analytica political scandal has demonstrated (again) how futile it is to consent to the use of our data in datafied societies dominated by a handful of transnational data companies.

Nevertheless, while these two discussions are happening at the same time, bridges between them are almost nonexistent. Moreover, when we talk about our sexual practices mediated by platforms (sexting, dating apps, etc.), the discussion of how these two types of consent collide, and of what complexities follow, is almost always ignored. For example, in the policy debate on NCII (non-consensual dissemination of intimate images), the lack of consent is seen either almost entirely as a sexual offense or as a mere problem of data protection and privacy.

In order to shed light on the matter, we are launching today the research “Consent to our Data Bodies: Lessons from feminist theories to enforce data protection”. The goal was to explore how feminist views and theories on sexual consent can feed the data protection debate, in which consent (amid futile “Agree” buttons) seems to live in a void of meaning. Envisioned more as a critical provocation than a recipe, the study is an attempt to contribute to a debate on data protection that seems to return over and over again to a liberal and universalizing idea of consent. This framework has already proved key to enabling abusive behavior by different powerful players, ranging from big monopolistic ICT companies like Facebook to Hollywood celebrities and even religious leaders, such as the recent case of João de Deus in Brazil.

On the other hand, feminist debates have made it clear that the liberal approach to individuals as autonomous, free and rational subjects is problematic in many ways, especially in terms of meaningful consent: this formula does not consider the historical and sociological structures within which consent is exercised. In this sense, a very rich question to pose to the data protection debate from a feminist perspective is “who has the ability to say no?”

In this context, Perez points to something fundamental: “it's not just about consent or not, but fundamentally the possibility of doing so.” Also in this regard, it seems interesting to recall what Sara Ahmed (2017) says about the intersectional approach to the impossibility of saying “no”: “The experience of being subordinate — deemed lower or of a lower rank — could be understood as being deprived of no. To be deprived of no is to be determined by another's will.”

If consent is a function of power, not all players have the ability to negotiate or to reject the conditions imposed by platforms' Terms of Service (ToS). In this framework, beyond clicking “Agree” to the usage of our personal data, what most people do is simply “Obey” the company's will. Therefore, confronting the fantasy of digital technologies functioning as vehicles of empowerment and democracy, what we have are data societies in which control is validated by a legal contract and a bright button of agreement.

The liberal framework of consent in data protection has come under scrutiny by important privacy scholars. Helen Nissenbaum asks us to abandon the idea of “true” consent and, ultimately, to stop thinking of consent as a measure of privacy. She calls for dropping the simplification of online privacy and adopting a more complex, contextual approach. Julie E. Cohen takes a very similar stance. For her, understanding privacy simply as an individual right is a mistake:

“The ability to have, maintain, and manage privacy depends heavily on the attributes of one's social, material, and informational environment” (2012). In this way, privacy is not a thing or an abstract right, but an environmental condition that enables situated subjects to navigate within preexisting cultural and social matrices (Cohen, 2012, 2018).

From contextual integrity frameworks to condition-centered frameworks, among others, some of these scholars call for dismissing the liberal trap of “Notice and Consent” as a universal legitimating condition for data protection, and instead for protecting privacy in the design of the platform rather than in legal contracts.

Sadly, as long as legal contracts remain a mechanism of social control, privacy and feminist activists should be pushing for strong changes on both fronts: design and consent in ToS. In this sense, we have sketched a “matrix of qualifiers of consent from body to data” in order to start thinking, creatively and collectively, about ways to ensure strong and contextually meaningful data protection standards for all users.

[Figure: matrix of qualifiers of consent from body to data]

The matrix shows that, while some qualifiers overlap across the debates in both fields, the list of consent qualifiers present in data protection debates, such as in the European General Data Protection Regulation (GDPR), taken as a model by many privacy-aware jurisdictions, falls short: it disregards some structural challenges and loosely collapses all qualifiers into the single action of clicking a button.

What would the technical and legal alternatives be if we set out to think about and design technologies that allow for the tangible expression of all the qualifiers listed in feminist debates and, more importantly, recognize that there are no universal norms when those who consent do so under different conditions and power dynamics?

We hope that some of the findings of this research (available below) are just the beginning of a long and exciting feminist journey toward collectively building a feminist framework for consent on the Internet. #FeministInternet

:::::

The full version of the research “Consent to our Data Bodies: Lessons from feminist theories to enforce data protection”, produced by Coding Rights with the support of Privacy International and funding from the International Development Research Center, is available here: https://codingrights.org/docs/ConsentToOurDataBodies.pdf