Witness protection: cracking the voice and image
Written by Martin Biéri & Alexis Léautier
Translated by VoxProtect
04 January 2023
The issue of witness protection is not new: legal measures exist to protect the anonymity of witnesses, especially when their safety (or that of their relatives) is at risk. However, the technical measures that are supposed to guarantee this anonymity by modifying audio and video information have limits, and those limits are constantly being pushed back by technological progress.
The protection of the witness’s identity
While whistleblower protection was strengthened by the Waserman Act in 2022 and the transposition of the 2019 European Directive, protecting anonymity in legal or journalistic contexts remains a challenge, especially in light of technological advances.
Measures to protect witnesses have existed in France for several years – shielding them from outside pressure, such as the subornation of witnesses, but also practices already in use, such as concealing the identity of police “informers” in official reports. They were strengthened in 2001 with the creation of a legal framework for anonymous testimony, “an important novelty inspired by the accusatory procedures of common law countries” (Citoyens et délateurs, 2005).
The 2001 law thus introduced new measures to conceal certain information that puts witnesses at risk. For example, a witness can give the address of the police station rather than their own to avoid reprisals from the person they are incriminating.
This reinforcement continued in 2016, notably with some technical clarifications. In legal proceedings (for a felony or a misdemeanor punishable by at least three years in prison), a witness may be required to appear, with measures to prevent re-identification: “In certain circumstances (for example, if his or her safety is no longer assured), the witness may be allowed to use an assumed name. If confronted with the suspect, the confrontation will be conducted remotely. The witness will not be visible and his/her voice will be masked. Revealing the identity or address is punishable by 5 years in prison and a fine of €75,000” (service-public). These additions are to be understood in the context of the fight against terrorism, particularly following the attacks of 2015, as the title of the text indicates.
Thus, in addition to keeping the individual’s identity (name, surname, address, etc.) secret, i.e. keeping it outside the procedure, there are two ways to protect them: by “removing” their image (not being visible) and by “masking” their voice. In the first case, not being present (“remotely”) is a simple and obvious measure: the physical absence of the individual from the courtroom (for example, during a confrontation) obviously protects them. There are also measures that degrade the image so that it provides no direct identification information (blurring or pixelation, for example).
As for the voice, the technical measures used are also well known: they generally involve modifying the voice by shifting it toward higher or lower frequencies. This is not a degradation of the sound but a transposition, known as “pitch shifting” (see also below).
These techniques are also found in television reports, in which people testify in exchange for the protection of their anonymity, on more or less sensitive subjects. Several devices exist: the person can be off-camera or in the shadows, leaving only a vague silhouette; the person can be “blurred” (a filter is added over the image or, conversely, the image quality is degraded to mask what is considered most identifiable, namely the face); the person can be anonymized with a black bar over the eyes; they can also be replaced by an actor or journalist reading their words – or their words can simply be written on a card.
Inherent limitations to the technique
But are these technical measures truly sufficient? First of all, in the case of the voice, it is not technically complicated to apply the modification in the opposite direction, quickly approximating the real voice and making it possible to re-identify the person. This operation is available in most sound editing, recording or music creation software, including free ones. “Pitch shifting”, this famous linear modulation of the signal, thus appears to be an extremely weak protection measure for witnesses or sources. The technique may have been adequate a few decades ago, when reversing it was costly and accessible only to a limited number of people, but this is no longer the case with the shift to digital formats and widely available software.
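This weakness can be sketched in a few lines. The example below (in Python with NumPy; the naive resampling method and all parameters are illustrative assumptions, not what any broadcast tool actually uses) shifts a test tone up by seven semitones, as an anonymization filter might, then simply applies the opposite shift to recover the original pitch:

```python
import numpy as np

def pitch_shift(signal, semitones, sr=16000):
    """Naive pitch shift by resampling: every frequency is multiplied by
    2**(semitones/12). (Real tools also time-stretch to keep the duration.)"""
    ratio = 2 ** (semitones / 12)
    n_out = int(len(signal) / ratio)
    # Read the signal at stretched sample positions via linear interpolation
    old_idx = np.arange(n_out) * ratio
    return np.interp(old_idx, np.arange(len(signal)), signal)

def dominant_freq(signal, sr=16000):
    """Estimate the strongest frequency as the FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    return freqs[np.argmax(spectrum)]

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)       # a 220 Hz tone standing in for a voice

masked = pitch_shift(voice, +7, sr)       # "protected" voice, up 7 semitones
recovered = pitch_shift(masked, -7, sr)   # trivially reversed by the listener

print(round(dominant_freq(voice, sr)))      # 220
print(round(dominant_freq(masked, sr)))     # ~330 (220 * 2**(7/12))
print(round(dominant_freq(recovered, sr)))  # 220 again: the masking is undone
```

Because the transposition is a fixed, invertible mapping, anyone with the broadcast audio and ordinary software can undo it; no secret key protects the witness.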
Moreover, the voice is a multifaceted piece of data (see the White Paper on voice assistants and our articles The rights of the voice): in addition to being a characteristic specific to each individual, it is the vehicle of the transmitted message. The way of speaking, verbal tics, the accent… are all clues that can identify the person. And, by extension, all the classic issues of data anonymization apply: removing directly identifying attributes is not necessarily sufficient. It is possible to re-identify a person (by inference or cross-checking) thanks to the contextual information contained in the recording.
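This cross-checking risk is what the anonymization literature calls a linkage attack. The sketch below is entirely hypothetical (all names and attributes are invented for illustration): the “anonymized” testimony carries no name, yet its contextual attributes match exactly one entry in an outside dataset.

```python
# Hypothetical linkage attack: the anonymized record keeps no direct
# identifier, but its quasi-identifiers single out one person elsewhere.
anonymized_testimony = {"region": "Lyon", "profession": "nurse", "accent": "southern"}

public_records = [
    {"name": "A. Martin", "region": "Lyon",  "profession": "teacher", "accent": "northern"},
    {"name": "B. Durand", "region": "Lyon",  "profession": "nurse",   "accent": "southern"},
    {"name": "C. Petit",  "region": "Paris", "profession": "nurse",   "accent": "southern"},
]

# Keep every outside record whose attributes all match the testimony
matches = [
    record for record in public_records
    if all(record[key] == value for key, value in anonymized_testimony.items())
]

if len(matches) == 1:
    # A unique match defeats the anonymization entirely
    print("Re-identified:", matches[0]["name"])   # prints "Re-identified: B. Durand"
```

The point is that no single attribute is identifying on its own; it is their combination, cross-checked against outside knowledge, that re-identifies the witness.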
The most famous example of these loopholes is the re-identification of Sonia (an alias). This person, who had provided information about a terrorist and helped prevent an attack in 2015, had her identity revealed. As a result, she was forced to change her name, address, and more. The incident also contributed to the 2015 bill “Fight against organized crime, terrorism and their financing, and improving the efficiency and guarantees of criminal procedure” mentioned above.
At the same time, it should be noted that the links between voice analysis and justice are multiplying: new specialized actors are emerging to support investigators. For example, the company Agnitio, whose Batvox software is used by several police departments in Europe, was able to authenticate the voice of Jérôme Cahuzac in the recordings revealed by Mediapart in 2013 and later used by the courts. This episode prompted reactions in the scientific community about the reliability of such devices in judicial proceedings: “Despite the constant progress of science, researchers in the field consider almost unanimously that current methods of voice recognition are unclear,” according to J.-F. Bonastre, professor at the Computer Science Laboratory of Avignon and a specialist in speech processing and voice authentication (whose 2017 interview with the LINC can be found here). In an article entitled “1990-2020: retrospectives on 30 years of exchanges around voice identification in forensic environments”, he also reminds us that the scientific foundations of voice expertise are contested by academic researchers, and that the position of the French-speaking scientific community on the subject has not changed since a motion voted in 1990. It is interesting to note that researchers participating in trials do so as scientific witnesses and not as judicial experts.
Link to the article: https://linc.cnil.fr/protection-des-temoins-casser-la-voix-et-limage