Interview Questions for Ioanna Papageorgiou, a Researcher at "Artificial Intelligence Free from Bias"

EU Funds Legal Scholar Ioanna Papageorgiou, Marie Skłodowska-Curie Fellow at Hanover University's Institute of Legal Informatics, as Part of the Pan-European NoBIAS Research Programme Under Horizon 2020.


In the digital age, the European Union's anti-discrimination law extends its protection to include digital algorithms, but challenges remain in enforcing this law and providing redress to victims. Recognizing this, the NoBIAS project, a pan-European research programme funded by the EU's Horizon 2020 science programme, is working to ensure AI actors debias, test, and oversee their models, and can be held accountable for harms due to algorithmic discrimination.

The NoBIAS project, in which Ioanna Papageorgiou works as a legal scholar and doctoral researcher, is divided into three groups: understanding bias in data, mitigating bias in algorithms, and accounting for bias in results. The team is making strides in documenting bias in data through ontologies, developing causal methods for understanding bias, debiasing ranking methods on top of networks, building ensemble models for facial classification, and addressing legal issues related to bias mitigation.
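To make the "debiasing ranking" idea concrete, here is a minimal, generic sketch of fairness-aware re-ranking: a relevance-sorted candidate list is re-ordered so that every prefix of the ranking contains at least a minimum share of candidates from a protected group. The function name, the `min_share` quota, and the greedy strategy are illustrative assumptions, not the project's actual method.

```python
# Illustrative greedy fairness-aware re-ranking (an assumed sketch,
# not NoBIAS's actual algorithm). Input is already sorted by relevance.

def fair_rerank(ranked_ids, is_protected, min_share=0.4):
    """ranked_ids: candidate ids, best first.
    is_protected: dict id -> bool (membership in the protected group).
    Returns a re-ranking whose every prefix holds >= min_share protected
    candidates whenever enough protected candidates remain."""
    prot = [c for c in ranked_ids if is_protected[c]]
    rest = [c for c in ranked_ids if not is_protected[c]]
    out = []
    while prot or rest:
        n_prot = sum(1 for c in out if is_protected[c])
        if prot and n_prot < min_share * (len(out) + 1):
            out.append(prot.pop(0))   # prefix quota not met: force protected
        elif not rest:
            out.append(prot.pop(0))
        elif not prot or ranked_ids.index(rest[0]) < ranked_ids.index(prot[0]):
            out.append(rest.pop(0))   # otherwise, most relevant remaining
        else:
            out.append(prot.pop(0))
    return out
```

For example, with candidates `[1..6]` where only 5 and 6 are protected and `min_share=0.4`, the quota pulls candidate 5 to the top and candidate 6 into third place, while relevance order is preserved everywhere else.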

One of the project's key findings is the use of activation steering and bias-aware decoding to reduce discrimination in AI-generated content. For instance, a method combining Dynamic Activation Steering (DSV) with a bias-aware decoder (BAD) has shown promising results. This approach adjusts model outputs to avoid stereotypical or biased answers and generates more neutral and inclusive responses, such as correcting biased answers in question-answering or reframing harmful language about marginalized groups.
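The combination described above can be sketched in miniature: steer a hidden state with a "debias" direction before projecting to vocabulary logits, then penalize tokens on a blocklist at decode time. Everything here is an assumed toy setup (a three-word vocabulary, a two-dimensional hidden state, a hand-picked steering vector and penalty), not the actual DSV/BAD implementation.

```python
import numpy as np

# Toy illustration of activation steering + bias-aware decoding.
# All names and numbers are illustrative assumptions, not NoBIAS code.

vocab = ["she", "he", "they"]
# Hidden-state -> vocabulary-logits projection of a toy "language model".
W_out = np.array([[2.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0]])

def decode_step(hidden, steer=None, biased_tokens=(), penalty=2.0):
    if steer is not None:
        hidden = hidden + steer               # activation steering
    logits = hidden @ W_out
    for i, tok in enumerate(vocab):           # bias-aware decoding
        if tok in biased_tokens:
            logits[i] -= penalty
    return vocab[int(np.argmax(logits))]

hidden = np.array([1.0, 0.0])                 # state that favors "she"
steer = np.array([-0.5, -0.5])                # assumed "debias" direction
decode_step(hidden)                                      # -> "she"
decode_step(hidden, steer, biased_tokens={"she", "he"})  # -> "they"
```

With neither mechanism, the toy model emits the stereotyped completion; steering plus the decode-time penalty shifts the argmax to the neutral token, which is the qualitative behavior the paragraph describes.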

The implementation, monitoring, and oversight of fairness-aware algorithms in the real world are crucial to assessing their broader equality impact in the long run. The NoBIAS project emphasizes the importance of appropriate bias mitigation strategies, which are linked to the main sources of algorithmic bias and are feasible depending on the state-of-the-art of AI and debiasing research.

Moreover, testing and monitoring a system's performance and fairness on an ongoing basis after deployment is essential. Discriminatory effects of AI can relate to diverging accuracy rates across different demographic groups and can also be present at various stages of the AI pipeline.
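One simple way to operationalize this kind of post-deployment monitoring is to compute accuracy per demographic group from logged predictions and raise an alert when the gap between groups exceeds a threshold. The record format and the 0.1 threshold below are illustrative assumptions, not NoBIAS specifics.

```python
from collections import defaultdict

# Minimal post-deployment fairness monitoring sketch (assumed setup):
# compare accuracy across demographic groups on logged predictions.

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap_alert(records, threshold=0.1):
    """True if the best- and worst-served groups diverge by > threshold."""
    acc = accuracy_by_group(records)
    return max(acc.values()) - min(acc.values()) > threshold
```

Run on a rolling window of production logs, a check like this catches exactly the "diverging accuracy rates across different demographic groups" the paragraph warns about; analogous checks can be placed at other stages of the AI pipeline.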

Policy guidelines and the adoption of adequate codes of conduct by private companies are also welcome. Creating a more diverse and representative AI ecosystem is an important and feasible strategy. As AI is used in crucial social contexts such as law enforcement, surveillance, employment, and housing decisions, it is essential to address the issue of algorithmic bias to uphold fairness and protect EU citizens.


In summary, the NoBIAS project is paving the way for a fairer and more inclusive AI ecosystem by researching and developing novel methods for AI-based decision-making without bias. The project's findings demonstrate that bias mitigation can be balanced with accuracy to reduce discrimination in AI-generated content, setting a promising precedent for the future of AI.

  1. The NoBIAS project, led by Ioanna Papageorgiou, is dividing its efforts into understanding bias in data, mitigating bias in algorithms, and accounting for bias in results to create a fairer AI ecosystem.
  2. A key finding of the NoBIAS project is the use of activation steering and bias-aware decoding to reduce discrimination in AI-generated content, such as adjusting model outputs to avoid stereotypical or biased answers and producing more neutral and inclusive responses.
  3. The implementation, monitoring, and oversight of fairness-aware algorithms in real-world applications are essential to assess their broader equality impact in the long run, as the NoBIAS project stresses the importance of appropriate bias mitigation strategies.
  4. To uphold fairness and protect EU citizens, the NoBIAS project advocates for policy guidelines, adoption of adequate codes of conduct by private companies, and the creation of a more diverse and representative AI ecosystem, considering its use in crucial social contexts like law enforcement, employment, and housing decisions.
  5. The NoBIAS project's research and development of novel methods for AI-based decision-making without bias are setting a promising precedent for the future of AI, demonstrating that bias mitigation can be balanced with accuracy to reduce discrimination in AI-generated content.
