Digital Security

Trustworthy AI systems based on Knowledge Graphs and LLMs: Attacks and Defenses

PhD Position – Thesis offer M/F (Reference: SN/MÖ/trustworthyAI/032025)

The performance of AI technologies relies on access to large, high-quality datasets and on the training of accurate models. This dependence on large amounts of data makes AI systems vulnerable to privacy attacks, which can leak sensitive information; to adversarial attacks, which can manipulate inputs or model parameters in order to tamper with the training process; and to fairness attacks, which aim to modify the model's behavior in order to induce bias.
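As an illustration of the kind of input manipulation covered by the adversarial attacks mentioned above, the sketch below shows the Fast Gradient Sign Method (FGSM), a standard evasion attack. It is a minimal, generic example: the model, the inputs, and the perturbation budget `epsilon` are placeholders chosen for illustration and are not part of this offer.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example by taking one gradient-sign step
    that increases the model's loss on the true label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Move each input feature by +/- epsilon in the direction that raises the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    # Toy setup: an untrained linear classifier on random data,
    # purely to show how the attack is invoked.
    model = nn.Linear(10, 3)
    x = torch.randn(4, 10)
    y = torch.randint(0, 3, (4,))
    x_adv = fgsm_perturb(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

Privacy attacks (e.g. membership inference) and fairness attacks follow a similar adversarial logic but target what the model reveals or how it treats subgroups, rather than its predictions on a single perturbed input.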