FAIR AI Attribution (FAIA): setting clear standards for human-created and AI content

With nearly 90% of all online content projected to be AI-generated by 2026, the increasingly blurred lines between human and machine-made creations could have serious repercussions.

To guarantee transparency on the role of AI in digital content, FAIA is developing an open framework for AI attribution. In this article, Sylvia Le Dévédec (Leiden University) and Sebastian Posth (Liccium) provide greater insight into this project, which is supported by Topsector ICT and SIDN Fund.

Published 15 September 2025

Tags

From Call to Action
KIA D
AI
Calls

Expert

Sylvia Le Dévédec

Researcher and assistant professor

Leiden University

Expert

Sebastian Posth

Founder

Liccium

From Call to Action: Topsector ICT promotes public-private partnerships that drive the Dutch innovation landscape. In recent weeks and months, a variety of grant-funded projects have been launched within its Knowledge and Innovation Agenda for Digitalisation (KIA-D). In a series of five articles, we will shed light on inspiring examples of businesses, knowledge institutes and the public sector collaborating on digital solutions to social and economic challenges. This first article shines the spotlight on:

The FAIR AI Attribution (FAIA) project

‘As GenAI continues to evolve, it is becoming increasingly difficult to detect which content is created by humans and which is produced or edited by AI. This lack of transparency has serious repercussions not only for public trust and the fair reuse of creative content, but also for research integrity, the credibility of digital media, and legal and regulatory compliance,’ says Sylvia Le Dévédec, researcher and assistant professor at Leiden University.

As a cellular biologist, light microscopist, medical researcher and safety imaging expert, Sylvia Le Dévédec uses imaging as a core technology to better understand disease mechanisms. As manager of the High-Content Imaging Platform at Leiden University’s Cell Observatory, she is a fierce advocate of open science and the FAIRification process at Leiden’s imaging facility.

About the FAIR principles

FAIR’s four principles describe how research data – encompassing both data and metadata – should be accessible and usable. The ‘F’ stands for Findable: data and metadata should be easy to discover, for instance through clear keywords and persistent identifiers. The ‘A’ stands for Accessible: data should be retrievable under clear conditions, such as open access or well-defined access rights. The ‘I’ stands for Interoperable: data should be in a standard format that can be easily integrated with other data sets. And the ‘R’ stands for Reusable: data should be well described and labelled with metadata, so that it can be reused by others in a different context or for future research. Find out more about the FAIR principles.
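As a loose illustration of the four principles in practice, the sketch below bundles them into a single metadata record. All field names and values are hypothetical, not a formal FAIR schema:

```python
# Minimal sketch of a FAIR-style metadata record.
# Field names and example values are illustrative, not a formal FAIR schema.

def make_metadata_record(pid, keywords, licence, fmt, description):
    """Bundle the four FAIR concerns into one record."""
    return {
        "identifier": pid,            # Findable: persistent identifier
        "keywords": keywords,         # Findable: clear keywords
        "access_rights": licence,     # Accessible: well-defined conditions
        "format": fmt,                # Interoperable: standard format
        "description": description,   # Reusable: well-described content
    }

record = make_metadata_record(
    pid="doi:10.1234/example-dataset",  # hypothetical identifier
    keywords=["bio-imaging", "high-content screening"],
    licence="CC-BY-4.0",
    fmt="OME-TIFF",
    description="Example live-cell imaging data set (illustrative).",
)
print(record["identifier"])
```

A record like this is what makes a data set discoverable and reusable long after the original project ends.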

‘The FAIR principles are about more than open data. They’re about ensuring data is discoverable in the long term, well defined and usable. In my case, I need metadata and properly annotated data, with a particular emphasis on large bio-imaging data sets,’ says Sylvia. As AI has made it increasingly easy to manipulate data, she has seen how imaging users have become acutely aware of the importance of demonstrating the reliability, credibility and integrity of data.

New digital content infrastructures

Some months ago, Sylvia came into contact with Sebastian Posth, founder of Liccium and advocate for transparent, verifiable digital media ecosystems. Liccium is an innovative rights management platform that develops tools and standards for content creators and rights holders. ‘Our tools enable them to digitally sign and protect their original works, assert authorship and metadata and participate in the emerging AI and digital content infrastructures on fair and trustworthy terms,’ Sebastian explains.

This ensures trust in ownership, attribution and authenticity of digital media content. Liccium focuses on science and research alongside other domains and industries, including the music industry, publishing, media companies and social media platforms.

Sebastian co-initiated the creation of the International Standard Content Code ISO 24138:2024 and designed projects such as Creator Credentials and a joint registration system for verifiable opt-outs for AI training data, FAIR AI attribution and open content registers. ‘With a background in publishing, media and rights management, I turn complex technologies into practical systems for provenance, attribution and rights transparency. That’s how I set up a trust infrastructure, collaborating with publishers, AI developers and institutions on open, interoperable frameworks for digital content integrity.’

The FAIA project: how it all began

The FAIA (FAIR AI Attribution) project is a collaboration between Leiden University, the GO [Global Open] FAIR Foundation and Liccium. Sylvia and Sebastian are supported by their colleagues Kristina Hettne and Alessa Gambardella (Leiden University) and Erik Schultes (GO FAIR).

FAIA is developing an open framework for AI attribution to guarantee transparency on the use of AI in digital content. The framework features a set of transparent, publicly accessible guidelines, standards and tools that specify how the role of AI in output creation is attributed and justified. ‘The absence of standardised methods for attributing AI content is fuelling the spread of disinformation and fake news. And while providers in the EU are legally obliged to ensure that synthetic content is detectable as artificially generated [under Article 50 of the AI Act, eds.], there’s still no scalable, machine-readable mechanism supporting transparency. Our project hopes to fill that gap,’ explains Sylvia.

Losing embedded metadata
Every creator, platform and AI provider can instantly generate an ISCC fingerprint of a digital asset using the internationally standardised algorithm in ISO 24138, known as the International Standard Content Code (ISCC). Embedded metadata, by contrast, is fragile: metadata fields are often stripped from files when they are shared online, deleting that information. Because the ISCC is derived from the content itself, the fingerprint can be regenerated from the file at any time, even after the embedded metadata has been lost.
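To illustrate the content-derived idea only: the real ISCC in ISO 24138 is composed of several similarity-preserving hash units, not a plain cryptographic hash, so the sketch below substitutes SHA-256 purely to show that an identifier computed from the content survives the loss of embedded metadata:

```python
import hashlib

# Illustration only: ISO 24138 ISCC codes are built from several
# similarity-preserving hash units; a plain SHA-256 digest stands in
# here to show that a content-derived identifier can be recomputed
# from the file itself, even after embedded metadata fields are lost.

def content_fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:16]  # truncated for display

original = b"An article body, exactly as published."
copy_with_metadata_stripped = b"An article body, exactly as published."

# Same content bytes -> same fingerprint, regardless of lost metadata.
assert content_fingerprint(original) == content_fingerprint(copy_with_metadata_stripped)
print(content_fingerprint(original))
```

This is why declarations linked to a content-derived identifier can still be looked up after a file has passed through platforms that strip its metadata.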

The framework being developed by FAIA consists of a standard vocabulary to disclose the role of AI in digital content creation alongside a flag system to signal the nature of the content.

About the FAIA vocabulary
The FAIA vocabulary was developed to provide a transparent and consistent record of the role of AI in the creation process, offering a standard method of describing content creation, transformation or generation in workflows involving both humans and AI. FAIA is independent of domain-specific reporting standards, but does align with them.

Activity types provide a detailed description of what was done to the content, such as generation, editing, summarisation, translation or transformation. FAIA fills in the gaps in IPTC Digital Source Type (for news and photographs) or STM AI Classification (for academic publications) using generic, media-independent codes such as generation (AI produces the primary content), contribution (partial creative input), enhancement (structural/content extension), transformation, analysis and refinement. The project also seeks to develop a suitable classification.

For AI systems, FAIA also includes a set of metadata fields to describe the system used: tool (name of the interface), model, version and provider. ‘This approach supports consistent descriptions and transparency on AI use across different media sectors and content types,’ says Sebastian.
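The vocabulary and system metadata described above can be sketched as a single attribution record. The activity codes follow the article; the field names, class name and example values are assumptions, not the project’s published schema:

```python
from dataclasses import dataclass

# Sketch of a FAIA-style attribution record. The activity codes follow
# the vocabulary described in the article; field names and example
# values are assumptions, not the project's published schema.

ACTIVITY_TYPES = {
    "generation",      # AI produces the primary content
    "contribution",    # partial creative input
    "enhancement",     # structural/content extension
    "transformation",
    "analysis",
    "refinement",
}

@dataclass
class AIAttribution:
    activity: str   # one of ACTIVITY_TYPES
    tool: str       # name of the interface
    model: str
    version: str
    provider: str

    def __post_init__(self):
        if self.activity not in ACTIVITY_TYPES:
            raise ValueError(f"unknown activity type: {self.activity}")

# Example: a translation step recorded with a hypothetical tool.
record = AIAttribution(
    activity="transformation",
    tool="ExampleChat",       # hypothetical
    model="example-model",    # hypothetical
    version="1.0",
    provider="ExampleCorp",   # hypothetical
)
print(record.activity)
```

Validating the activity code against a closed vocabulary is what keeps attributions comparable across media sectors.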

About the flag system
There are three colours of flags. Sebastian: ‘Green denotes Human-Created Content, or HCC for short. This means that the content, which could be video, photo, sound fragments or text, is made or edited only by humans. The creator may have used digital tools, but no AI was involved at any stage of the creative or editing process.’

The yellow flag denotes AI-Assisted Content (AAC). This means that people are the primary authors of the content, but AI systems were used during the process. ‘This may include suggestions, generating fragments or refinement steps that are performed under human supervision,’ he explains.

Lastly, the blue flag denotes AI-Generated Content (AIG): content generated predominantly or entirely by an AI system. ‘This means that AI is the main creative agent,’ he says.

The FAIA flags are linked to the content-derived identifier (CDI) or ISCC fingerprint. Declarations of ISCC codes and FAIA flags are digitally signed by the rights holder and published in publicly accessible registers. ‘The FAIA flags should ultimately be accessible to third parties, so that a viewer, listener or reader can easily discover whether the digital content they’re consuming has been made by a human or a machine or is real or fake,’ says Sylvia.
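The flow described above can be sketched end to end. Real FAIA declarations would use standardised ISCC codes and public-key signatures in public registers; the sketch below substitutes SHA-256 and an HMAC purely for illustration, and all names are hypothetical:

```python
import hashlib
import hmac

# Sketch of a FAIA declaration: a flag bound to a content-derived
# identifier and signed by the rights holder. Real declarations would
# use the standardised ISCC and public-key signatures; SHA-256 and
# HMAC stand in here purely for illustration.

FLAGS = {"HCC": "green", "AAC": "yellow", "AIG": "blue"}

def declare(content: bytes, flag: str, signing_key: bytes) -> dict:
    if flag not in FLAGS:
        raise ValueError(f"unknown flag: {flag}")
    cdi = hashlib.sha256(content).hexdigest()  # stand-in for an ISCC code
    payload = f"{cdi}:{flag}".encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"cdi": cdi, "flag": flag, "signature": signature}

def verify(declaration: dict, signing_key: bytes) -> bool:
    payload = f"{declaration['cdi']}:{declaration['flag']}".encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, declaration["signature"])

key = b"rights-holder-secret"  # illustrative only
d = declare(b"a human-written essay", "HCC", key)
assert verify(d, key)
print(d["flag"])
```

A third party who recomputes the identifier from the content and checks the signature can establish both which flag was declared and who declared it.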

Support from Topsector ICT and SIDN Fund

The FAIA project is funded by SIDN Fund’s and Topsector ICT’s ‘Responsible AI in Practice’ call, which has awarded funding to 10 innovative research projects. It is the first call to address the third pillar of the Knowledge and Innovation Agenda for Digitalisation: reflection on digital innovation technologies (DITs), concentrating on the ethical development and use of digital technologies. This specific call focuses on the development of practical frameworks, conditions and design principles (including secure by design) for ethical AI, AI applications and derived solutions. Find out more about this call.


Open source

With the technology already in place, last year saw the completion and publication of the international standard for identifying digital content – ISO 24138:2024. ‘Once a structured, interoperable and verifiable framework for FAIR AI Attribution has been designed and implemented, it can be used at scale. The first prototype will focus on the role of AI in academic papers, with the goal of extending this to other contexts at a later stage. The project outcomes will be open-source and implemented through the Liccium platform,’ says Sebastian.

‘The FAIA project also ties in with how we’re addressing the growing use of AI tools among students. I supervise numerous student literature studies, but the widespread use of AI tools has led us to reconsider the very concept of literature research. I think the FAIA project could also have a significant role to play in this,’ says Sylvia.

You can find more information about this project here: FAIR AI Attribution (FAIA) | FAIA – FAIR AI Attribution.
