I, for one, welcome our new robot overlords.
Seriously, AI and synthetic voice is creeping into audiobooks. Why? Lower production costs, among other things. I’ve been thinking about this for a while. I recently noticed that users on TikTok use speech readers to voice their videos — including a new one that sounds a bit like Danny DeVito. These aren’t your old Stephen Hawking speech synthesizers. Nor are they even your Siris or whatever. They’re getting closer and closer to sounding like real humans.

Also, in some of the video games I play, the CGI is getting close to passing out the other side of the uncanny valley — that cataclysmic dip in connection and sympathy one gets for a digital simulacrum that looks real, but something is off enough to creep you the fuck out. And the deepfake world means we can take people’s faces and lay them over other people’s so realistically that one of the guys who did it on YouTube with Luke Skywalker was hired by Disney to help with their new TV series.

So at what point do actual people become redundant? Remember Cypher in the OG The Matrix with his steak? Yeah, at what point will we cease to care that it’s fake?
(Pro tip: we already don’t.)
Proponents of AI audiobook narration tout its much lower production costs (compared with a traditional recording by a human narrator) as a way to improve the profitability of audiobooks and to let publishers release more titles with limited audiences. But according to actor and narrator Emily Lawrence, cofounder of PANA and president of its board of directors, “It’s very easy to reduce this issue to dollars and cents, but it’s very complicated and nuanced.” If AI narration proliferates, “it’s not just narrators who will lose their jobs,” Lawrence said. “There’s an entire ecosystem of people who rely on audiobooks for their livelihood. People who direct audiobooks, people who edit audiobooks, people who check audiobook narration for word-for-word perfection against the manuscript.”
Lawrence believes there are many ethical issues surrounding AI technology. “For example,” she notes, “if I were to license my voice, and lose all control over how my voice is then used, my voice could potentially be used to voice content that I find morally repulsive.” She also points out that “as of now, a lot of AI licensing consists of non-union contracts,” and that narrators are vulnerable to entering agreements that exploit their voices and don’t offer fair compensation.
Similarly, in Huber’s view, the negatives of AI outweigh any positives. She places “loss of livelihoods, loss of integrity in storytelling, and loss of personal connection” high on her list of concerns. “The only pros I see are financial,” she said. “And it’s the other team that benefits, not the narrators nor the listeners. Do you really think [AI company] Speechki is going to pass their savings on to the listener? No. Listeners make choices about what to spend money on, and they have a right to demand clear labeling of robot voices, as do authors. And then there is the potential theft of our voices—our speech patterns, our acting choices—to create the AI. That’s a whole other can of worms.”