
deep fakes threaten a lot of things...

Posted: Thu May 19, 2022 12:10 pm
by rSin
and the experts say the sky is the limit...


Deepfakes can fool biometric checks used by banks, research finds

A team of researchers has found that biometric tests used by banks and cryptocurrency exchanges to verify users’ identities can be fooled


https://www.dailydot.com/debug/biometri ... erability/

deep fakes threaten a lot of things...

Posted: Thu Sep 29, 2022 2:58 pm
by ben ttech
Roger Stone is claiming that leaked audio of him promoting mass violence is a deep fake.

Depending on who does the analysis, their technique will have a threshold at which its ability to detect fakes ends. This allows positive statements describing what would be needed to produce a fake they couldn't detect; hence only these publicly known producers could have made it.

Or some independent actor.

deep fakes threaten a lot of things...

Posted: Thu Sep 29, 2022 3:35 pm
by Intrinsic
What about the biometric of common sense? Roger Stone is a convicted liar, so there's that.

deep fakes threaten a lot of things...

Posted: Thu Sep 29, 2022 3:50 pm
by rSin
The problem remains immense and growing.

Just think of intel professionals having to wait for analysis that gives them a letter grade for how trustworthy any video or audio is. They're either consuming a huge share of the computing resources available to do this, or they're winging it...

deep fakes threaten a lot of things...

Posted: Thu Sep 29, 2022 6:02 pm
by rSin

deep fakes threaten a lot of things...

Posted: Thu Sep 29, 2022 6:21 pm
by Intrinsic
Here's how crooks will use deepfakes to scam your biz
https://www.theregister.com/2022/09/28/ ... ake_video/

deep fakes threaten a lot of things...

Posted: Sun Oct 02, 2022 12:27 am
by rSin
novel...


To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Luckily scientists have techniques to estimate what someone -- or some being such as a dinosaur -- would sound like based on anatomical measurements of its vocal tract. We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker's vocal tract during a segment of speech. This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.
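The article doesn't name the estimation method the team used, but a standard textbook route from audio to tract shape is linear predictive coding (LPC): the reflection coefficients it produces map onto the relative cross-sectional areas of a lossless concatenated-tube model of the vocal tract. A minimal numpy sketch of that idea (illustrative only, not the researchers' actual pipeline):

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns (a, k): prediction coefficients and reflection coefficients."""
    n = len(frame)
    r = np.array([np.dot(frame[: n - i], frame[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        a[1:i] = a[1:i] + ki * a[i - 1:0:-1]
        a[i] = ki
        err *= 1.0 - ki * ki
    return a, k

def tube_areas(k, lip_area=1.0):
    """Map reflection coefficients to relative cross-sectional areas of a
    concatenated lossless-tube ("Kelly-Lochbaum") vocal tract model,
    working from the lips back toward the glottis."""
    areas = [lip_area]
    for km in k:
        areas.append(areas[-1] * (1.0 - km) / (1.0 + km))
    return np.array(areas)

# Demo on a synthetic windowed frame (a stand-in for ~25 ms of real speech).
rng = np.random.default_rng(0)
frame = np.hamming(400) * rng.standard_normal(400)
a, k = lpc_coefficients(frame, order=10)
print(tube_areas(k))  # relative areas; the shape profile, not absolute size, matters
```

With the autocorrelation method the reflection coefficients stay inside (-1, 1), so the derived areas are always positive; only the relative profile along the tract is meaningful here.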

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have. In other words, we expected analysis of deepfaked audio samples to yield simulated vocal tract shapes that do not exist in people. Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect. For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape. This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it's possible to identify whether the audio was generated by a person or a computer.
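That "drinking straw" observation suggests a simple decision rule: flag tract estimates whose cross-section is both very narrow and nearly uniform along their length. The thresholds and units below are invented for illustration; the article does not describe the researchers' actual classifier.

```python
import numpy as np

# Illustrative thresholds only -- not taken from the paper.
MIN_MEAN_DIAMETER_CM = 1.0  # a straw-like tract is far narrower than a human one
MIN_VARIATION = 0.15        # human tracts vary in width along their length

def looks_synthetic(diameters_cm):
    """Flag a vocal tract estimate whose cross-section is both narrow and
    nearly uniform: the 'drinking straw' signature described above."""
    d = np.asarray(diameters_cm, dtype=float)
    mean = d.mean()
    variation = d.std() / mean  # coefficient of variation
    return bool(mean < MIN_MEAN_DIAMETER_CM and variation < MIN_VARIATION)

# A plausibly human-like profile versus a straw-like one (made-up numbers).
human = [1.5, 2.8, 3.4, 2.1, 1.2, 2.6, 3.0]
straw = [0.6, 0.62, 0.58, 0.61, 0.6, 0.59, 0.6]
print(looks_synthetic(human))  # False
print(looks_synthetic(straw))  # True
```

In practice such a check would run per speech segment and the flags would be aggregated across an utterance, since any single frame's estimate is noisy.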


https://slashdot.org/story/22/10/01/004 ... ter-voices