How to control the growing menace of fake identities and altered reality

In today’s dynamic technological landscape, the emergence of generative AI and its convergence with deepfake technology, affecting voice, image, and video alike, has opened up both opportunities and risks.

Although safeguarding against fake identities and altered realities is challenging because the technology evolves so quickly, a few methods can still be used to detect deepfakes. AI fabrication detection solutions are helpful, as they employ techniques such as remote photoplethysmography (rPPG) to discern fabricated media by scrutinising subtle changes in blood flow and eye movements.
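To make the idea concrete, here is a minimal illustrative sketch of an rPPG-style check in Python, assuming OpenCV and NumPy are installed; the frequency band and the interpretation of the score are simplifications, not a production detector. A genuine face video carries a faint periodic colour change driven by the heartbeat, which many synthetic faces lack.

```python
import cv2
import numpy as np

def rppg_pulse_strength(video_path: str) -> float:
    """Return the share of spectral energy in the human heart-rate band."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    face_finder = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_finder.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Skin reflectance in the green channel varies slightly with
        # blood volume, so track its mean over the face region per frame.
        greens.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    if len(greens) < int(fps * 2):          # need a couple of seconds of face
        return 0.0
    signal = np.asarray(greens) - np.mean(greens)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)    # 42-240 beats per minute
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))
```

A low score means no heartbeat-like rhythm was found in the face region, which is one weak signal, among many, that the footage may be synthetic.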

These solutions also detect fabricated audio by analysing inconsistencies in tone, modulation, pitch, and frequency. Another method is digital watermarking, in which an official source embeds a digital watermark inside the audio or video file before publication.
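A heavily simplified sketch of the watermarking idea, in Python with NumPy, embeds a known bit pattern in the least-significant bits of 16-bit audio samples and later checks whether the pattern survives; real schemes are far more robust and imperceptible, and the pattern here is purely illustrative.

```python
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int16)  # illustrative pattern

def embed(samples: np.ndarray) -> np.ndarray:
    """Overwrite each sample's least-significant bit with the repeated mark."""
    out = samples.copy()
    bits = np.resize(MARK, out.shape[0])       # repeat the pattern over the track
    return (out & ~np.int16(1)) | bits

def verify(samples: np.ndarray) -> float:
    """Return the fraction of LSBs that still match; 1.0 = watermark intact."""
    bits = samples & np.int16(1)
    expected = np.resize(MARK, samples.shape[0])
    return float((bits == expected).mean())

# Stand-in for real 16-bit PCM audio.
audio = (np.random.randn(48000) * 1000).astype(np.int16)
marked = embed(audio)
print(verify(marked))   # ~1.0 for the published, watermarked file
print(verify(audio))    # ~0.5 (chance level) for unmarked audio
```

Because re-synthesising or regenerating the audio destroys the embedded pattern, a verification score near chance level on supposedly official audio is a red flag.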

Metadata analysis is crucial too, as inconsistencies and outliers in the byte streams of fabricated content can be found in its metadata. Researchers are also exploring biometric analysis, comparing pre-recorded biometric information (both audio and video) against fabricated counterparts to uncover irregular peaks or gaps. Additionally, a meticulous examination of fabricated videos may reveal patchy frames, unnatural expressions, lighting discrepancies, and inconsistent colour tones, all of which call the authenticity of the media into question.
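As a small illustration of the metadata angle, the Python sketch below uses the Pillow library to dump an image's EXIF fields and flag common gaps; the file name is a placeholder, and the heuristics are hints rather than proof, since generated or heavily edited images often lack the camera fields a genuine photo carries.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    """Print EXIF fields and flag gaps that often accompany fabricated images."""
    exif = Image.open(path).getexif()
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for name, value in fields.items():
        print(f"{name}: {value}")
    # Heuristic flags: absence of these is only a hint, never proof.
    if "Make" not in fields and "Model" not in fields:
        print("Warning: no camera make/model recorded")
    software = str(fields.get("Software", ""))
    if software:
        print(f"Note: file was written by '{software}'")

inspect_metadata("suspect.jpg")  # placeholder path
```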

However, to control the growing menace of deepfakes, collaboration between consumers, tech experts, law enforcement, and policymakers to establish comprehensive legal frameworks and regulations is pivotal. This multi-pronged approach, combining technological innovation, vigilant monitoring, and collaborative effort, stands as a robust defence against the growing risks posed by deepfake technology.

For end-users aiming to guard against deepfakes, a few habits go a long way. First and foremost, as consumers of digital media, it is crucial to exercise caution when sharing information online and to consider its potential for misuse. Always cross-check information with reliable sources before believing it, and look for the general indications of fabricated audio or video before sharing. Additionally, treat content from non-reputable sources with caution and verify the identity of anyone contacting you through digital media, especially when sensitive information is requested.

For organisations seeking to prevent deepfakes and protect their data, adopting appropriate measures is essential to reduce the threat of malicious attacks. To safeguard data and proprietary information, organisations can implement various prevention strategies. Adding an extra layer of verification is crucial to bolstering security, particularly when video- or audio-based authentication is involved in business processes. Biometric authentication enables continuous monitoring and verification of incoming voice and video prints.
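One way to picture the voice-print idea is the following Python sketch, which compares an enrolled speaker embedding against each incoming call segment using cosine similarity. The `extract_embedding` stub stands in for a real pretrained speaker-embedding model, and the 0.75 threshold is purely illustrative.

```python
import numpy as np

def extract_embedding(segment: np.ndarray) -> np.ndarray:
    # Placeholder: a real deployment would run a pretrained speaker-embedding
    # model here. This stub only summarises the waveform so the sketch runs.
    vec = np.array([segment.mean(), segment.std(), np.abs(segment).max()])
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def monitor_call(enrolled: np.ndarray, segments, threshold: float = 0.75) -> None:
    # Compare every incoming segment against the enrolled voiceprint and
    # flag sudden drops in similarity for manual review.
    for i, segment in enumerate(segments):
        score = cosine_similarity(enrolled, extract_embedding(segment))
        status = "consistent" if score >= threshold else "ESCALATE"
        print(f"segment {i}: similarity {score:.2f} -> {status}")

# Toy usage with random audio stand-ins.
rng = np.random.default_rng(0)
enrolled = extract_embedding(rng.normal(size=16000))
monitor_call(enrolled, [rng.normal(size=16000) for _ in range(3)])
```

In practice the embedding model, not the comparison, does the heavy lifting; the point is that every segment is checked continuously, rather than authenticating once at the start of a call.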

Furthermore, organisations should implement robust cybersecurity measures and privacy policies to safeguard employee data, restrict internal access to sensitive information, and employ strong authentication methods. Conducting routine cybersecurity training is imperative to educate employees about deepfake risks and enhance their ability to identify and report potential threats.

Various global organisations and content development and hosting platforms are already building software to detect deepfakes. This capability can be extended to others to control and further prevent the proliferation of fabricated media. Lastly, implementing the right set of policies is crucial to safeguarding against deepfake threats, enforcing regulations around the misuse, development, distribution, and hosting of deepfake videos.

Santosh Jinugu

(The author is Partner at Deloitte India)
