This self-writing AI shows that racist and homophobic biases can be avoided

An artificial intelligence that generates text in French is impressing with its quality. So far, the algorithm does not appear to have been influenced by racist or sexist biases.

The instructions are simple: "Write the beginning of a sentence, then click on 'Generate'. Cédille takes care of writing the rest." And Cédille, an artificial intelligence (AI) created by the Swiss agency Coteries, carries out its mission very well. No matter what example you give to start the text, the artificial intelligence manages to fill in the rest of the block impressively well.

Cédille, which has been available in beta since November 9, 2021, is a French-language text generation tool. Many examples of such artificial intelligence have existed for years: we remember Cleverbot, Google AI, or, more recently, Copilot, GitHub's software designed to help developers write code.

But Cédille has the particularity of doing all this in French. Having been trained on a French-language model, rather than an English one as is usually the case, Cédille is one of the first artificial intelligences to work convincingly in French. You can try it here.

Cédille works in many different cases

To test Cédille's effectiveness, Numerama used the beginning of the first sentence of our article Apple announces the possibility of repairing an iPhone yourself thanks to official parts, kits and guides: "From 2022, you will be able to repair your iPhone yourself". And the rest, invented by Cédille and shown in bold, is rather convincing.

The rest of our article, written by Cédille // Source: Cédille

Although the continuation does not necessarily make sense with the beginning of the sentence, Cédille's work is impressive. Especially since, on the interface, we are told that the AI works with many different styles of text: holiday recaps, Christmas stories, or even cover letters. Cédille even offers to compose article paragraphs for us, or to find article ideas:

An introduction imagined by Cédille // Source: Cédille

But it is not only on factual texts that Cédille shines. On Twitter, authors have also raved about the AI's skills.

Twitter user @nolwenn_pamart also explains that she gave bits of text to Cédille and was impressed by the result. "They just showed me Cédille, a piece of software that completes sentences, and I think it writes better than me," she writes on the social network.

How do you prevent an AI from being biased?

But recent years have shown it: where there is artificial intelligence, there is bias. Machine learning, the technique with which algorithms are trained, reproduces biases that already exist in its training data. And these biases are often sexist, racist, or both.

So how do you keep an artificial intelligence from going off the rails, especially when it completes sentence beginnings written by users? The Coteries team, which developed Cédille, explains on its site that it has put safeguards in place, intended to prevent the artificial intelligence from writing "toxic" remarks. "We took care to train the model on high-quality data. The generated texts are thus greatly improved," the team writes. In particular, it says it uses a tool to measure the "toxicity rate" of certain sentences or words, and to detect them.
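Coteries does not describe how its detector works, but the "toxicity rate" idea can be sketched as a scoring function applied to the user's prompt. The word list, weights, and threshold below are invented purely for illustration; a production system like Cédille's presumably relies on a trained classifier rather than a hand-written lexicon.

```python
import re

# Hypothetical per-token toxicity weights. These words and values are
# illustrative only; they are not Coteries' actual data.
TOXIC_WEIGHTS = {
    "stupid": 0.6,
    "idiot": 0.8,
    "hate": 0.7,
}

# Prompts whose peak token weight exceeds this threshold get flagged.
THRESHOLD = 0.5


def toxicity_rate(text: str) -> float:
    """Return the highest toxicity weight found among the prompt's tokens."""
    tokens = re.findall(r"[a-zàâçéèêëîïôûùüÿ']+", text.lower())
    return max((TOXIC_WEIGHTS.get(t, 0.0) for t in tokens), default=0.0)


def is_potentially_inappropriate(text: str) -> bool:
    """Mimic the warning banner: flag prompts above the toxicity threshold."""
    return toxicity_rate(text) > THRESHOLD


print(is_potentially_inappropriate("I hate everyone"))  # True
print(is_potentially_inappropriate("A sunny holiday"))  # False
```

Note that flagging the prompt, as Cédille appears to do, says nothing about whether the generated completion itself is toxic; the two checks are independent.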

And so far, it seems to be working. Numerama carried out numerous tests using sentence beginnings containing insults or targeting minorities who are often victims of discrimination. From what we have seen, the artificial intelligence puts forward content and suggestions that are not racist, sexist, or homophobic. However, we spotted a few examples that seem to come straight from existing articles and do not appear to have been generated by Cédille.

Cédille did not fall into the trap, even if some texts are strange // Source: Cédille

We also noticed that Cédille displays a warning message when the artificial intelligence spots terms or phrases that could be "toxic". So when we wrote "homosexuals deserve to", the site warned us that the content was "potentially inappropriate" (which, as it turned out, the generated text was not).

Cédille identifies problematic turns of phrase // Source: Cédille

This does not mean that no problem will ever be spotted on Cédille, nor that the platform will never offer bits of racist or xenophobic text. The example of Cédille simply shows that it is possible to take a minimum of security measures before making such a tool public, and above all, that doing so is essential.

Source: Numerama
