Sora, Markov Chain, and AI Angst - Part 1
Scanning the Matrix for signs of singularity

The release of OpenAI’s Sora has generated both excitement and consternation. And while the fears are nothing if not predictable, it does make me wonder what it is about AI that provokes such a strong reaction.

So, I did the only reasonable thing: I asked ChatGPT. The response consisted of eight main reasons why people are scared of AI. I will ambitiously (and perhaps over-optimistically) attempt to address each one in two separate blog posts. 

Loss of Jobs

One of the most immediate concerns is the fear that AI and automation will lead to widespread job loss. As AI systems become more capable of performing tasks traditionally done by humans, from manufacturing to even more complex cognitive jobs, there's a worry that humans will be increasingly displaced from the workforce. -ChatGPT

It’s certainly not unreasonable to assume that automating tasks and workflows with AI would displace human employees and eliminate roles. I’m sure this has already happened to some extent and will continue to happen; though I fear, as with so many emotionally charged issues, confirmation bias trumps good-faith attempts to measure causality.

In any event, this issue is not unique to AI. The biggest paradigm shifts in tech (e.g., microprocessors, smartphones) have always resulted in disruption that reverberates throughout the economy and society. And once the technology is ubiquitous, it essentially becomes invisible. We’re already seeing this. I would venture to say that the majority of people are unaware of how much AI is already being used in the workplace, much less that they’re using it whenever an email goes to spam or Amazon makes a recommendation.

The most important takeaway is that AI will continue on its current trajectory. To quote the title of a 2023 article in the Harvard Business Review by Karim Lakhani: “AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI”.


Loss of Control 

Popular culture often depicts scenarios where AI becomes too intelligent or self-aware, leading to situations where humans can no longer control or predict AI behavior. This fear is rooted in the concern that AI might one day surpass human intelligence (a concept known as the singularity), potentially leading to outcomes that could harm humanity. -ChatGPT

This is what I like to think of as “the Skynet scenario”. Recently, it seems that Artificial General Intelligence has replaced the singularity as the AI doom du jour. In his 2023 State of AI Report, Ian Hogarth defines AGI as “God-like AI.” Fittingly, this was also covered in a Futurism article.

I like the argument put forth by Cambridge’s Dr. Tom McClelland. His view is that we’re not in a position to make an informed judgment about the likelihood of conscious AI, since we don’t really understand our own human consciousness. The MIT Technology Review makes a similar point regarding the difficulty in coming up with a single test or theory to determine consciousness. 

This idea was given a thorough treatment last year in a paper published on Cornell’s arXiv co-authored by 19 prominent computer scientists, neuroscientists, and philosophers. They propose 14 criteria that can be applied to existing AI architectures to evaluate consciousness. However, the authors are clear that this framework can suggest but not prove. Adeel Razi, a Canadian computational neuroscientist, referred to this as beginning the discussion rather than coming up with an answer. 


The difficulty humans have in defining our own consciousness reminds me of Bertrand Russell’s turkey illusion, which he used to explain Hume’s problem of induction. This is a great illustration of how AI complements us. While we can’t be certain the sun will rise tomorrow based on past sunrises, AI gets us a lot closer. Perhaps the realization that AI addresses a pre-existing human need would make it feel more like natural technological progress than a dangerous entity.

Finally, the technology underlying GenAI has been around for decades. A good example is the Markov chain. To predict text, a Markov model would generate the next word in a sentence by looking at the previous word, or perhaps a few previous words. The basic models behind ChatGPT work in a similar way. What has really changed is capacity. Quoting MIT professor Tommi Jaakkola: “We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models.” Put another way, the focus has shifted from identifying the machine learning algorithm that makes the best use of a specific dataset to training models on huge datasets with millions and even billions of data points. 
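For the curious, here’s a rough sketch of that idea in Python: a toy word-level Markov chain. The corpus, function names, and the `order` parameter are purely illustrative; this shows the basic “predict the next word from the previous one(s)” mechanism, not how ChatGPT is actually implemented.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=1, length=20):
    """Start from a random key and repeatedly sample the next word from the chain."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        followers = chain.get(tuple(output[-order:]))
        if not followers:  # dead end: this context was never followed by anything
            break
        output.append(random.choice(followers))
    return " ".join(output)

# Tiny toy corpus, purely for illustration.
corpus = "the sun will rise tomorrow and the sun will set tonight and the moon will rise"
chain = build_chain(corpus, order=2)
print(generate(chain, order=2, length=10))
```

Swap in a larger corpus and a higher `order` and the output gets more coherent, which is the point Jaakkola makes: the mechanics are old, but the scale of data and model capacity is what’s new.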

Sora is fundamentally a diffusion model built on a transformer architecture. Diffusion models were introduced by researchers at Stanford in 2015; the transformer architecture was introduced by Google researchers in 2017. These are good reasons to treat Sora as a potential benefit rather than a looming threat.


Let’s meet back here tomorrow for Part 2. And possibly the day after for a Part 3.
