
The Write Stuff

Dall-E "got" the pun! But why colorized hands?

This week we'll look at the fun and the fear of using AI in writing fiction.

You want to support my work but don't need a keynote from a mad scientist? Please become a paid subscriber to this newsletter and recommend it to friends!

Mad Science Solves...

I don’t love writing. It is a painful and thoroughly imperfect process of turning precious, hard-learned ideas into garbage. A story that soars in my mind becomes a dismal morass on the page. But, as many writers before me have learned, writing is rewriting. And so, from that mess of words that used to be my beautiful idea, I look for new words, new metaphors, and sometimes whole new ideas to serve the deeper story. I don’t love writing, but I do appreciate having written.

The result of that painful writing process is new insight for me. Writing is like a deep version of the classic Why, Why, Why exercise, forcing me to confront the parts of my stories that only work in my head because the flaws hide in conceptual scotomas. Most of these stories are nonfiction attempts to link together the findings in my own research, combined with those of others, into a bigger understanding of a significant human experience.

How to Robot-Proof Yourself explores how AI, education, and work must all transform together, not simply for the sake of jobs but to produce better lives. The story in The Tax on Being Different concerns a paradox: difference creates value in the world, yet difference is burdened with profound costs. And in Small Sacrifices…well, the subtitle is “The science, economics, and story of Purpose”, and my favorite current chapter title is “Why do assholes rise to the top?” For me, all three of these have been powerful exercises in understanding my own work. I hope they can soon provide some of that value, and a few chuckles, to others.

Nonfiction is only one way to explore the stories that interest me. I’m working on a screenplay about how we are a story we tell ourselves and what it means to love someone if you can no longer hear that part of your own story. I also have the opening scenes of a fantasy epic, The Long Caravan, a romp through the epic fantasy and sci-fi I loved when I was younger (and still do thanks to John Scalzi, Brandon Sanderson, and others) but passed through the mind of someone who also thinks of the world in terms of high-dimensional manifolds.

I write because despite the pain of the process, it opens worlds to me that I might never otherwise visit. I hope you don’t mind if I occasionally drag you along with me.

Stage & Screen

My Spring is quickly filling up (and even next Fall)!

  • This Week, DC: I'll be explaining why "fiduciary AI" must be a civil right.
  • Feb 28 online: A Q&A with the legal team at Siemens about AI & the Law.
  • Apr 16, LA: it's no longer virtual! Come see How to Robot-Proof Your Kids with the LA School Alliance.
    • btw, I'll be in LA all week: book me at a discount!
  • May 8, Boston: chatting with BCG on lifting Collective Intelligence and The Neuroscience of Trust.
  • May 8-10, Santa Clara: I return to Singularity University
  • May 23, Seoul: keynoting the Asian Leadership Conference! (from last year)
  • Mid-June, UK & EU: Buy tickets for the Future of Talent Summit and so much more!
  • July, DC: Keynote at Jobs For the Future Horizons!
    • Early Bird registration is open.

Does your company, university, or conference happen to be in one of the above locations and want the "best keynote I've ever heard" (a phrase shockingly spoken by multiple audiences last year)?

<<Please support my work: book me for a keynote or briefing!>>

Research Roundup

The Shape & Speed of Stories & Water

Stories have shape. Stories have speed. Some stories are successful. AI reveals all(-ish).

Last year, two separate papers used deep learning to embed movie and TV scripts into high-dimensional semantic spaces to analyze their structure. Each measured the shape and speed of the narratives in terms of the semantic similarity between the beats in the screenplay.
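For the curious, here is a minimal sketch of the general idea, not the papers' actual pipeline: embed each beat of a script with an off-the-shelf sentence encoder, then treat the distance between consecutive beats as "speed" and the total path length as "ground covered". The model choice, the toy beats, and the exact metrics below are my own illustrative assumptions.

```python
# Rough sketch of measuring a story's semantic "speed" and "ground covered".
# Not the papers' actual method; model and metrics are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical beats from a screenplay outline.
beats = [
    "A weary caravan master counts her wagons at dawn.",
    "A stowaway is discovered hiding among the spice barrels.",
    "Bandits shadow the caravan across the salt flats.",
    "The stowaway reveals she can read the old road-signs.",
    "The caravan escapes the bandits through a forgotten pass.",
]

# One embedding per beat; rows are L2-normalized so cosine distance
# reduces to a simple dot product.
emb = model.encode(beats, normalize_embeddings=True)

# Cosine distance between consecutive beats = per-step semantic "speed".
step_speeds = 1.0 - np.sum(emb[:-1] * emb[1:], axis=1)

speed = step_speeds.mean()          # how fast the story moves, on average
ground_covered = step_speeds.sum()  # total semantic path length

print(f"mean speed: {speed:.3f}, ground covered: {ground_covered:.3f}")
```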

The first analysis focused on “the shape of stories”. It found that “movies and TV shows that move faster [through semantic space] are liked more”, but that “TV shows that cover more ground are liked less”. So, aspiring authors who want AI to write the perfect screenplay: keep it simple and move fast.

The model, however, showed the opposite effect when applied to academic papers. Those that moved “faster are cited less”, while “papers that cover more ground or are more circuitous are cited more”. (What do you do if you want to write a smart science-driven screenplay? (Just asking for a friend.))

I will note that my narratives tend to be fast, circuitous, and cover too much ground, and yet people keep paying me unseemly sums of money to give talks and write books. Is the above finding causal or just a correlation? Maybe people don’t like dense shows because most of them are poorly written crap.

Perhaps addressing the value of complexity, the second paper focused on the “speed of stories” with great nuance. Across “10,000 TV episodes” and “40,000 movie scripts”, it found that “slower semantic progression is beneficial at the beginning of narratives”, but as the story developed, “faster semantic progression…toward the end” increased success.
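Continuing the sketch above (thresholds here are arbitrary assumptions of mine), one crude way to see that pattern is to compare the average step speed in the opening stretch against the closing stretch:

```python
# Continues from the earlier sketch: `step_speeds` holds the per-step
# semantic distances between consecutive beats.
import numpy as np

n = len(step_speeds)
opening_speed = step_speeds[: n // 3].mean()    # first third of the beats
closing_speed = step_speeds[-(n // 3):].mean()  # final third of the beats

print(f"opening: {opening_speed:.3f}, closing: {closing_speed:.3f}")
# Per the second paper's finding, successful scripts would tend to show
# closing_speed > opening_speed.
```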

The two recent strikes in Hollywood by the writers’ and actors’ unions were about many issues, including the dramatically changing economics of streaming compared to traditional production. But one of the major issues was specifically the role of AI in the business of entertainment. In fact, I was asked by various groups involved in those negotiations to offer briefings on the possible futures of AI in the creative industry. I can’t share those conversations, but an article in the NYT provided an interesting insight into how these negotiations approached the question of AI in writing.

The Writers Guild of America made the role of AI a central negotiating demand, and the resulting contract “establishes the precedent that workers can and should have a say in when and how they use artificial intelligence at work.” One might say the same of globalization and other economic trends. I lean towards economic liberalization, but far too often its adherents treat workers as a systemic flaw to be eliminated rather than a precious resource to be fostered.

One of the main tenets of the new contract is that “if A.I. raises writers’ productivity or the quality of their output, guild members should snare an equitable share of the performance gains.” Further, “the parties agreed that A.I. is not a writer. The studios cannot use A.I. in place of a credited and paid guild member.” If the studio or the writers themselves make use of AI assistance, the writers would “receive the same minimum pay they would have had they written the piece from scratch.”

The outcome could easily be seen as a win for the writers, and the Screen Actors Guild achieved a similar outcome in their negotiations by preventing the use of actors’ AI avatars without compensation. But I genuinely believe that this is a solid outcome for the producers as well. I’ve heard many TV showrunners say that there are tens of thousands of people in LA who can write a script (and four more, now that GPT, Claude, Gemini, and PaLM 2 have arrived in town with Hollywood dreams). What those bosses are desperate to find is a writer with a new idea. By respecting the business value of creativity more than the speed of another generic script, the producers have avoided dragging the industry even lower than a nonstop stream of reality TV.


Vivienne L'Ecuyer Ming

Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley