a convincing fake created by “research and deployment company” OpenAI, whose Jukebox project uses artificial intelligence to generate music, complete with lyrics, in a variety of genres and artist styles. Along with Sinatra, they’ve done what are known as “deepfakes” of Katy Perry, Elvis, Simon and Garfunkel, 2Pac, Céline Dion and more. Trained on 1.2m songs scraped from the web, complete with the corresponding lyrics and metadata, the model can output raw audio several minutes long based on whatever you feed it. Input, say, Queen or Dolly Parton or Mozart, and you’ll get an approximation out the other end.
I don’t say it is wrong, I only say it IS.
I’ve seen the same thing with art created by computers.
I mean: once you accept the extremes of Rothko, Pollock, et al., machine approximations seem plausible.
Has not poetry (d)evolved to an algorithm for selection of cultural references of varying seriousness & sincerity, new & old . . . has not all literature? Some misguided souls even distend the algorithm to extremes on both ends of the selection array, from paucity to surplus.
Once you have developed the algorithm for the extremes, fine-tuning the algorithm for the middle range is possible – which is not to say easy.
How long until video can be reconstructed the same way?
After video, how long for reality?
Introspection leads me to the conclusion that these are developments already in progress. Evidence is everywhere, in this environment where reality is defined by what is seen on screen, large or small.
Which leaves us where the movie The Matrix left us, except – once the Matrix IS, what will machines need with humans? A raison d’être?
What will machines need with Human Reality?